Posted to user@hbase.apache.org by "Ratner, Alan S (IS)" <Al...@ngc.com> on 2012/11/21 21:01:55 UTC

HBase Issues (perhaps related to 127.0.0.1)

I'd appreciate any suggestions as to how to get HBase up and running.  Right now it dies after a few seconds on all servers.  I am using Hadoop 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.

History: Yesterday I managed to get HBase 0.94.2 working, but only after removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my clocks).  All was fine until this morning, when I realized I could not initiate remote log-ins to my servers (via VNC or NX) until I restored the 127.0.0.1 line in /etc/hosts.  With that line restored, I am back to a non-working HBase.
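In case it helps, here is the /etc/hosts layout I believe is usually recommended for this situation (a sketch, not my actual file: the 10.64.155.x addresses match the ones in the logs further down, but the alias ordering is my assumption). The idea is to keep the loopback line so local tools like VNC/NX keep working, but not to list the machine's own hostname on it:

```
# /etc/hosts sketch -- assumption, adjust to your network
127.0.0.1      localhost                               # loopback only; no hadoop1 here
10.64.155.52   hadoop1.aj.c2fse.northgrum.com hadoop1
10.64.155.53   hadoop2.aj.c2fse.northgrum.com hadoop2
10.64.155.54   hadoop3.aj.c2fse.northgrum.com hadoop3
```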

With HBase managing ZK I see the following in the HBase Master and ZK logs, respectively:
2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase

2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running

At roughly the same time (clocks not perfectly synchronized) I see this in a Regionserver log:
2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
...
2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master

Logs and configuration follow below.

I then tried managing ZK myself, and HBase fails for what seem to be different reasons:
2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry

2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused

Both HMaster error logs (HBase-managed and self-managed ZK) refer to the server as 127.0.0.1 rather than by its name (hadoop1), its real IP address, or even simply "localhost".
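I suspect this is consistent with hosts-file lookup order: the first entry whose aliases include the queried name wins, so a loopback line that also lists hadoop1 shadows the routable address. A minimal sketch of that first-match behavior (the file contents below are hypothetical, not my actual /etc/hosts):

```python
# Sketch: hosts-file lookup returns the FIRST matching entry,
# so "127.0.0.1 ... hadoop1" shadows the real 10.64.155.52 line.
def lookup(hosts_text, name):
    """Return the first IP whose aliases include `name`, hosts-file order."""
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip
    return None

broken = """\
127.0.0.1    localhost hadoop1
10.64.155.52 hadoop1
"""
fixed = """\
127.0.0.1    localhost
10.64.155.52 hadoop1
"""
print(lookup(broken, "hadoop1"))  # 127.0.0.1 -- loopback line wins
print(lookup(fixed, "hadoop1"))   # 10.64.155.52
```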

start-hbase.sh itself runs OK (HBase managing ZK):
ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
hadoop1: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
hadoop2: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
hadoop3: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
starting master, logging to /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
hadoop2: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
hadoop6: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
hadoop3: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
hadoop5: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
hadoop4: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out

I have in hbase-site.xml:
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hadoop1:60000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/tmp/zookeeper_data</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1,hadoop2,hadoop3</value>
  </property>
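One aside on the config above: both the ZK dataDir and my log directory live under /tmp, which Ubuntu may clear on reboot, so ZK state would not survive a restart. A sketch of a persistent location (the path is an assumption, pick any durable directory):

```xml
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <!-- hypothetical persistent path instead of /tmp/zookeeper_data -->
  <value>/var/lib/zookeeper</value>
</property>
```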

I have in hbase-env.sh:
export JAVA_HOME=/home/ngc/jdk1.6.0_25/
export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
export HBASE_HEAPSIZE=2000
export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
export HBASE_MANAGES_ZK=true
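For the second experiment (managing ZK myself), I believe this flag has to be flipped as well, so that start-hbase.sh doesn't also spawn its own quorum alongside the external one; a sketch:

```
# hbase-env.sh, when running the external ZooKeeper 3.4.4 quorum
export HBASE_MANAGES_ZK=false
```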

From server hadoop1 (running HMaster, ZK, NN, SNN, JT):
Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386178
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/
hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ng
c/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/ho
me/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
2012-11-21 13:40:22,099 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 742@hadoop1
2012-11-21 13:40:22,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:22,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
2012-11-21 13:40:22,411 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:22,411 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
2012-11-21 13:40:22,747 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:22,747 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
2012-11-21 13:40:22,967 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:22,967 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:40:24,176 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:24,176 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:24,277 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:40:24,767 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:24,767 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:40:25,757 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:25,757 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:40:26,597 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:26,597 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:40:27,775 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:27,775 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:40:28,318 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:28,318 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:28,419 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:40:29,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:29,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:40:30,039 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:30,039 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:40:31,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:31,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:40:32,143 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:32,143 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:40:32,480 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:32,480 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:40:33,295 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:33,295 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:40:34,962 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:34,962 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:40:35,661 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:35,661 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:40:36,523 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:36,523 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:36,625 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
2012-11-21 13:40:36,625 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
2012-11-21 13:40:36,626 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
      at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
      at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
      at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
      at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
      ... 5 more
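Since the master's stack trace bottoms out in `ConnectionLoss for /hbase` after every quorum member closed the socket, a quick liveness probe of the ZooKeeper ensemble is worth running before restarting HBase. This is a minimal sketch (hostnames taken from the logs above; `nc` assumed available): a running ZooKeeper answers the built-in four-letter command `ruok` with `imok`, which matches the "ZooKeeperServer not running" warning seen on the server side when it does not.

```shell
# Probe each quorum member on the client port. A healthy server replies
# "imok"; a refused or silent connection prints "no answer".
for h in hadoop1 hadoop2 hadoop3; do
  resp=$(echo ruok | nc -w 2 "$h" 2181) || resp="no answer"
  echo "$h: ${resp:-no answer}"
done
```

If any host prints "no answer" here, the HBase retries above can never succeed regardless of HBase configuration.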


From server hadoop2 (running regionserver, ZK, DN, TT)
Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 193105
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 193105
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
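One side note on the startup dump above: it reports `open files (-n) 1024`, and the HBase documentation recommends raising the `nofile` limit well above that default (e.g. 10240 or more) for region servers, since low limits surface later as confusing "Too many open files" failures. A small sketch of the check, with illustrative (not prescriptive) numbers for a persistent raise, assuming the `ngc` user from the logs:

```shell
# Show the limits a process started from this shell will inherit.
ulimit -n   # soft limit on open file descriptors (1024 in the dump above)
ulimit -u   # max user processes
# A persistent raise would go in /etc/security/limits.conf, e.g.:
#   ngc  -  nofile  32768
#   ngc  -  nproc   32000
# (log out and back in for PAM to apply the new limits)
```

This is unlikely to be the cause of the immediate ZooKeeper connection failures, but it is worth fixing before the cluster takes real load.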
2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/tmp/hbase-ngc/logs, -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log, -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,DRFAS]
2012-11-21 13:40:57,222 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside HConnection retries=100
2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache with maximum size 493.8m
2012-11-21 13:40:57,699 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop2.aj.c2fse.northgrum.com
2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.0.0-12-generic
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=regionserver:60020
2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
2012-11-21 13:40:57,719 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12835@hadoop2
2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:57,727 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
2012-11-21 13:40:58,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
2012-11-21 13:40:58,726 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:00,660 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:00,761 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:01,422 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:02,370 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:02,627 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:02,627 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:03,968 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:04,733 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:04,733 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:04,835 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:05,741 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:05,741 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:06,192 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:07,313 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:07,313 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:08,273 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:09,090 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:09,090 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:09,711 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:09,711 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:11,121 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:11,121 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:11,600 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:12,320 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:12,320 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:12,861 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:12,861 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,962 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:12,962 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020 Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
      at java.lang.Thread.run(Thread.java:662)
2012-11-21 13:41:12,966 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020 Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
      at java.lang.Thread.run(Thread.java:662)
2012-11-21 13:41:12,966 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception during initialization, aborting
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
      at java.lang.Thread.run(Thread.java:662)
2012-11-21 13:41:12,969 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2012-11-21 13:41:12,969 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected exception during initialization, aborting
2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:14,834 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:14,834 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:15,335 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:15,335 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2012-11-21 13:41:15,975 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
      at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
      at java.lang.Thread.run(Thread.java:662)
2012-11-21 13:41:15,976 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2012-11-21 13:41:15,976 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
2012-11-21 13:41:15,978 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.

Finally, in the ZooKeeper log from hadoop1 I have:
Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386178
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase
2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 400
2012-11-21 13:40:21,064 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 800
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 1600
2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55216 (no session established for client)
2012-11-21 13:40:22,373 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60339
2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60339 (no session established for client)
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60342
2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60342 (no session established for client)
2012-11-21 13:40:23,187 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55221
2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55221 (no session established for client)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 3200
2012-11-21 13:40:24,116 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.54:35599
2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.54:35599 (no session established for client)
2012-11-21 13:40:24,176 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55225
...

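For context, my /etc/hosts on hadoop1 looks roughly like the first variant below (paraphrased from memory; the 10.64.155.52 line matches the addresses in the logs above). The second variant, with the loopback line removed, is what worked yesterday:

```
# Current (broken): hadoop1 resolves to loopback, matching the
# "hadoop1/127.0.0.1:2181" connection attempts in the logs
127.0.0.1    localhost hadoop1
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1

# Yesterday (working): loopback line removed entirely
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1
```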
Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386178
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/
hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ng
c/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/ho
me/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000
2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting
2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915
2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
      at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
      at org.apache.hadoop.ipc.Client.call(Client.java:1075)
      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
      at $Proxy10.getProtocolVersion(Unknown Source)
      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
      at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
      at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
      at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
      at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
      at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
      at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:344)
      at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused
      at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
      at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
      at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
      at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
      at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
      at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
      at org.apache.hadoop.ipc.Client.call(Client.java:1050)
      ... 18 more
2012-11-21 14:46:47,485 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2012-11-21 14:46:47,486 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: exiting
2012-11-21 14:46:47,486 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2012-11-21 14:46:47,487 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2012-11-21 14:46:47,488 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-11-21 14:46:47,488 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-11-21 14:46:47,524 INFO org.apache.zookeeper.ZooKeeper: Session: 0x33b247f4c380000 closed
2012-11-21 14:46:47,524 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
2012-11-21 14:46:47,524 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-11-21 14:46:47,524 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: HMaster Aborted
      at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:154)
      at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
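The retries above all target hadoop1/127.0.0.1:9000, i.e. the hostname "hadoop1" is resolving to loopback, so the master can never reach the NameNode from other hosts. A hedged sketch of an /etc/hosts that keeps 127.0.0.1 for localhost only (so VNC/NX keep working) while mapping cluster hostnames to routable addresses — the hadoop1/hadoop2 addresses below are illustrative; only hadoop3's 10.64.155.54 actually appears in the logs:

```text
# /etc/hosts (sketch): keep loopback for "localhost" only, and map
# each cluster hostname to its LAN address, never to 127.0.0.1.
127.0.0.1     localhost
10.64.155.52  hadoop1   # illustrative address
10.64.155.53  hadoop2   # illustrative address
10.64.155.54  hadoop3   # address seen in the ZK logs above
```

With this layout, local loopback services still find localhost, but any process resolving "hadoop1" (including HBase and HDFS clients) gets the LAN address.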

Alan


RE: Datanode: "Cannot start secure cluster without privileged resources"


Posted by "Kartashov, Andy" <An...@mpac.ca>.
Try running "sudo jps", or run jps as root ("# jps").

You will get more info, e.g.:

***** Jps
***** SecondaryNameNode
***** JobTracker
***** NameNode

-----Original Message-----
From: ac@hsk.hk [mailto:ac@hsk.hk]
Sent: Monday, November 26, 2012 9:44 AM
To: Harsh J
Cc: ac@hsk.hk; <us...@hadoop.apache.org>
Subject: Re: Datanode: "Cannot start secure cluster without privileged resources"

Hi,

I think you are right!



1) $ jps
16152
16500 Jps


2) ps axu | grep 16152
hduser   16152  0.1  1.4 1834900 116760 ?      Sl   21:34   0:06 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16497  0.0  0.0   9384   924 pts/0    S+   22:35   0:00 grep --color=auto 16152


3) ps axu | grep 16117
root     16117  0.0  0.0  17004   904 ?        S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16499  0.0  0.0   9388   920 pts/0    R+   22:35   0:00 grep --color=auto 16117
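Incidentally, the trailing "grep --color=auto 16152" / "grep --color=auto 16117" lines above are just the grep commands matching themselves in the process table. Querying the PID directly avoids that noise; a minimal sketch (it uses the shell's own PID, $$, only so the command is self-contained — substitute the PID that jps reported, e.g. 16152):

```shell
# Query a single PID instead of grepping the full process table;
# this avoids the stray "grep ..." line in the results and shows
# the full command line even when jps prints only a bare PID.
ps -p $$ -o user=,pid=,args=
```
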



I started all DNs in secure mode now.
Thanks again!

ac

On 26 Nov 2012, at 10:30 PM, Harsh J wrote:

> Could you also check what 16152 is? The jsvc is a launcher process,
> not the JVM itself.
>
> As I mentioned, JPS is pretty reliable, just won't show the name of
> the JVM launched by a custom wrapper - and will show just the PID.
>
> On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>>
>> Thanks for your reply.
>>
>> However, I think 16152 should not be the DN, since
>> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
>> 2) ps axu | grep 16117, I got
>> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>>
>> These are the two reasons I think jps can no longer be used to check a secure DN.
>>
>> Thanks again!
>>
>>
>> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>>
>>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>>
>>> "The jps command uses the java launcher to find the class name and
>>> arguments passed to the main method. If the target JVM is started with
>>> a custom launcher, the class name (or JAR file name) and the arguments
>>> to the main method will not be available. In this case, the jps
>>> command will output the string Unknown for the class name or JAR file
>>> name and for the arguments to the main method."
>>>
>>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>>
>>>> A question:
>>>> I started Secure DN then ran JPS as root, I could not find any running DN:
>>>> 16152
>>>> 16195 Jps
>>>>
>>>> However, when I tried to start the secure DN again, I got:
>>>> Warning: $HADOOP_HOME is deprecated.
>>>> datanode running as process 16117. Stop it first.
>>>>
>>>> Does it mean JPS is no longer a tool to check DN in secure mode?
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>>
>>>>> Hi Harsh,
>>>>>
>>>>> Thank you very much for your reply, got it!
>>>>>
>>>>> Thanks
>>>>> ac
>>>>>
>>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>>
>>>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>>>> needs to be started as root to grab reserved ports), and needs a
>>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>>
>>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am setting up HDFS security with Kerberos:
>>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>>
>>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>>
>>>>>>> OS: Ubuntu 12.04
>>>>>>> Hadoop: 1.0.4
>>>>>>>
>>>>>>> It seems that it could login successfully but something is missing
>>>>>>> Please help!
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Harsh J
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>
>
>
>
> --
> Harsh J

NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le présent courriel et toute pièce jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur et peuvent être couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le destinataire prévu de ce courriel, supprimez-le et contactez immédiatement l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent courriel

RE: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Try running  $sudo jps or jps as root "#jps"

You will get more info, i.e:

****Jps
***** SecondaryNameNode
***** JobTracker
***** NameNode


-----Original Message-----
From: ac@hsk.hk [mailto:ac@hsk.hk]
Sent: Monday, November 26, 2012 9:44 AM
To: Harsh J
Cc: ac@hsk.hk; <us...@hadoop.apache.org>
Subject: Re: Datanode: "Cannot start secure cluster without privileged resources"

Hi,

I think you are right!



1) $ jps
16152
16500 Jps


2) ps axu | grep 16152
hduser   16152  0.1  1.4 1834900 116760 ?      Sl   21:34   0:06 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16497  0.0  0.0   9384   924 pts/0    S+   22:35   0:00 grep --color=auto 16152


3) ps axu | grep 16117
root     16117  0.0  0.0  17004   904 ?        S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16499  0.0  0.0   9388   920 pts/0    R+   22:35   0:00 grep --color=auto 16117



I started all DNs in secure mode now.
Thanks again!

ac

On 26 Nov 2012, at 10:30 PM, Harsh J wrote:

> Could you also check what 16152 is? The jsvc is a launcher process,
> not the JVM itself.
>
> As I mentioned, JPS is pretty reliable, just wont' show the name of
> the JVM launched by a custom wrapper - and will show just PID.
>
> On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>>
>> Thanks for your reply.
>>
>> However, I think 16152 should not be the DN, since
>> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
>> 2) ps axu | grep 16117, I got
>> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>>
>> These are the two reasons that I think JPS is no longer a tool to check secure DN.
>>
>> Thanks again!
>>
>>
>> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>>
>>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>>
>>> "The jps command uses the java launcher to find the class name and
>>> arguments passed to the main method. If the target JVM is started with
>>> a custom launcher, the class name (or JAR file name) and the arguments
>>> to the main method will not be available. In this case, the jps
>>> command will output the string Unknown for the class name or JAR file
>>> name and for the arguments to the main method."
>>>
>>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>>
>>>> A question:
>>>> I started Secure DN then ran JPS as root, I could not find any running DN:
>>>> 16152
>>>> 16195 Jps
>>>>
>>>> However, when I tried to start the secure DN again, I got:
>>>> Warning: $HADOOP_HOME is deprecated.
>>>> datanode running as process 16117. Stop it first.
>>>>
>>>> Does this mean that JPS is no longer a usable tool for checking a DN in secure mode?
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>>
>>>>> Hi Harsh,
>>>>>
>>>>> Thank you very much for your reply, got it!
>>>>>
>>>>> Thanks
>>>>> ac
>>>>>
>>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>>
>>>>>> Secure DN needs to be started as root (it runs as the proper user, but
>>>>>> needs to be started as root to grab reserved ports), and needs a
>>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>>
>>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am setting up HDFS security with Kerberos:
>>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>>
>>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>>
>>>>>>> OS: Ubuntu 12.04
>>>>>>> Hadoop: 1.0.4
>>>>>>>
>>>>>>> It seems that it could log in successfully, but something is missing.
>>>>>>> Please help!
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Harsh J
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>
>
>
>
> --
> Harsh J

NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le présent courriel et toute pièce jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur et peuvent être couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le destinataire prévu de ce courriel, supprimez-le et contactez immédiatement l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent courriel

RE: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Try running "sudo jps", or run jps as root ("# jps").

You will get more info, e.g.:

****Jps
***** SecondaryNameNode
***** JobTracker
***** NameNode


-----Original Message-----
From: ac@hsk.hk [mailto:ac@hsk.hk]
Sent: Monday, November 26, 2012 9:44 AM
To: Harsh J
Cc: ac@hsk.hk; <us...@hadoop.apache.org>
Subject: Re: Datanode: "Cannot start secure cluster without privileged resources"

Hi,

I think you are right!



1) $ jps
16152
16500 Jps


2) ps axu | grep 16152
hduser   16152  0.1  1.4 1834900 116760 ?      Sl   21:34   0:06 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16497  0.0  0.0   9384   924 pts/0    S+   22:35   0:00 grep --color=auto 16152


3) ps axu | grep 16117
root     16117  0.0  0.0  17004   904 ?        S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16499  0.0  0.0   9388   920 pts/0    R+   22:35   0:00 grep --color=auto 16117



I started all DNs in secure mode now.
Thanks again!

ac

On 26 Nov 2012, at 10:30 PM, Harsh J wrote:

> Could you also check what 16152 is? The jsvc is a launcher process,
> not the JVM itself.
>
> As I mentioned, JPS is pretty reliable; it just won't show the name of
> the JVM launched by a custom wrapper - it will show just the PID.
>
> On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>>
>> Thanks for your reply.
>>
>> However, I think 16152 should not be the DN, since
>> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
>> 2) ps axu | grep 16117, I got
>> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>>
>> These are the two reasons why I think JPS is no longer a reliable tool for checking a secure DN.
>>
>> Thanks again!
>>
>>
>> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>>
>>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>>
>>> "The jps command uses the java launcher to find the class name and
>>> arguments passed to the main method. If the target JVM is started with
>>> a custom launcher, the class name (or JAR file name) and the arguments
>>> to the main method will not be available. In this case, the jps
>>> command will output the string Unknown for the class name or JAR file
>>> name and for the arguments to the main method."
>>>
>>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>>
>>>> A question:
>>>> I started Secure DN then ran JPS as root, I could not find any running DN:
>>>> 16152
>>>> 16195 Jps
>>>>
>>>> However, when I tried to start the secure DN again, I got:
>>>> Warning: $HADOOP_HOME is deprecated.
>>>> datanode running as process 16117. Stop it first.
>>>>
>>>> Does this mean that JPS is no longer a usable tool for checking a DN in secure mode?
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>>
>>>>> Hi Harsh,
>>>>>
>>>>> Thank you very much for your reply, got it!
>>>>>
>>>>> Thanks
>>>>> ac
>>>>>
>>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>>
>>>>>> Secure DN needs to be started as root (it runs as the proper user, but
>>>>>> needs to be started as root to grab reserved ports), and needs a
>>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>>
>>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am setting up HDFS security with Kerberos:
>>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>>
>>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>>
>>>>>>> OS: Ubuntu 12.04
>>>>>>> Hadoop: 1.0.4
>>>>>>>
>>>>>>> It seems that it could log in successfully, but something is missing.
>>>>>>> Please help!
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Harsh J
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>
>
>
>
> --
> Harsh J



Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

I think you are right!



1) $ jps
16152 
16500 Jps


2) ps axu | grep 16152
hduser   16152  0.1  1.4 1834900 116760 ?      Sl   21:34   0:06 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16497  0.0  0.0   9384   924 pts/0    S+   22:35   0:00 grep --color=auto 16152


3) ps axu | grep 16117
root     16117  0.0  0.0  17004   904 ?        S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16499  0.0  0.0   9388   920 pts/0    R+   22:35   0:00 grep --color=auto 16117



All DNs are now started in secure mode.
Thanks again!

ac

On 26 Nov 2012, at 10:30 PM, Harsh J wrote:

> Could you also check what 16152 is? The jsvc is a launcher process,
> not the JVM itself.
> 
> As I mentioned, JPS is pretty reliable; it just won't show the name of
> the JVM launched by a custom wrapper - it will show just the PID.
> 
> On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> Thanks for your reply.
>> 
>> However, I think 16152 should not be the DN, since
>> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
>> 2) ps axu | grep 16117, I got
>> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>> 
>> These are the two reasons I think jps is no longer a reliable tool for checking a secure DN.
>> 
>> Thanks again!
>> 
>> 
>> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>> 
>>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>> 
>>> "The jps command uses the java launcher to find the class name and
>>> arguments passed to the main method. If the target JVM is started with
>>> a custom launcher, the class name (or JAR file name) and the arguments
>>> to the main method will not be available. In this case, the jps
>>> command will output the string Unknown for the class name or JAR file
>>> name and for the arguments to the main method."
>>> 
>>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>> 
>>>> A question:
>>>> I started the secure DN and then ran jps as root, but could not find any running DN:
>>>> 16152
>>>> 16195 Jps
>>>> 
>>>> However, when I tried to start the secure DN again, I got:
>>>> Warning: $HADOOP_HOME is deprecated.
>>>> datanode running as process 16117. Stop it first.
>>>> 
>>>> Does this mean jps is no longer a tool for checking a DN in secure mode?
>>>> 
>>>> Thanks
>>>> 
>>>> 
>>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>> 
>>>>> Hi Harsh,
>>>>> 
>>>>> Thank you very much for your reply, got it!
>>>>> 
>>>>> Thanks
>>>>> ac
>>>>> 
>>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>> 
>>>>>> Secure DN needs to be started as root (it runs as the proper user, but
>>>>>> needs to be started as root to grab the reserved ports), and needs a
>>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>> 
>>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I am setting up HDFS security with Kerberos:
>>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>> 
>>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>> 
>>>>>>> OS: Ubuntu 12.04
>>>>>>> Hadoop: 1.0.4
>>>>>>> 
>>>>>>> It seems that it could log in successfully, but something is missing.
>>>>>>> Please help!
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Harsh J
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>> 
> 
> 
> 
> -- 
> Harsh J
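
Since jps cannot name a JVM started through a custom launcher such as jsvc, a plain PID liveness check is a workable substitute for verifying a secure DN. A minimal sketch (`check_pid` is a hypothetical helper; the pid-file path is the one shown in the jsvc command line above):

```shell
# check_pid: report whether a given PID is alive. "kill -0" probes the
# process without sending any signal, so it works even when jps cannot
# identify the JVM behind a jsvc wrapper.
check_pid() {
  if kill -0 "$1" 2>/dev/null; then
    echo "process $1 is running"
  else
    echo "process $1 is not running"
  fi
}

# The secure DN writes its PID to /tmp/hadoop_secure_dn.pid, so one could run:
#   check_pid "$(cat /tmp/hadoop_secure_dn.pid)"
check_pid $$    # the current shell itself, which is certainly alive
```

This is the same mechanism hadoop-daemon.sh uses to print "datanode running as process NNNNN. Stop it first." - it checks the recorded PID rather than asking jps.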


root     16497  0.0  0.0   9384   924 pts/0    S+   22:35   0:00 grep --color=auto 16152


3) ps axu | grep 16117
root     16117  0.0  0.0  17004   904 ?        S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.out -errfile /usr/local/hadoop-1.0.4/libexec/../logs/jsvc.err -pidfile /tmp/hadoop_secure_dn.pid -nodetach -user hduser -cp /usr/local/hadoop-1.0.4/libexec/../conf:/usr/lib/jvm/lib/tools.jar:/usr/local/hadoop-1.0.4/libexec/..:/usr/local/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/usr/local/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-core-
asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar -Xmx1000m -jvm server -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Xmx1024m -Dsecurity.audit.logger=ERROR,DRFAS -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/usr/local/hadoop-1.0.4/libexec/../logs -Dhadoop.log.file=hadoop-hduser-datanode-m147.log -Dhadoop.home.dir=/usr/local/hadoop-1.0.4/libexec/.. 
-Dhadoop.id.str=hduser -Dhadoop.root.logger=INFO,DRFA -Dhadoop.security.logger=INFO,NullAppender -Djava.library.path=/usr/local/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64 -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter
root     16499  0.0  0.0   9388   920 pts/0    R+   22:35   0:00 grep --color=auto 16117



I started all DNs in secure mode now.
Thanks again!

ac

On 26 Nov 2012, at 10:30 PM, Harsh J wrote:

> Could you also check what 16152 is? The jsvc is a launcher process,
> not the JVM itself.
> 
> As I mentioned, JPS is pretty reliable, just won't show the name of
> the JVM launched by a custom wrapper - and will show just the PID.
> 
> On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> Thanks for your reply.
>> 
>> However, I think 16152 should not be the DN, since
>> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
>> 2) ps axu | grep 16117, I got
>> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>> 
>> These are the two reasons that I think JPS is no longer a tool to check secure DN.
>> 
>> Thanks again!
>> 
>> 
>> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>> 
>>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>> 
>>> "The jps command uses the java launcher to find the class name and
>>> arguments passed to the main method. If the target JVM is started with
>>> a custom launcher, the class name (or JAR file name) and the arguments
>>> to the main method will not be available. In this case, the jps
>>> command will output the string Unknown for the class name or JAR file
>>> name and for the arguments to the main method."
>>> 
>>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>> 
>>>> A question:
>>>> I started Secure DN then ran JPS as root, I could not find any running DN:
>>>> 16152
>>>> 16195 Jps
>>>> 
>>>> However, when I tried to start the secure DN again, I got:
>>>> Warning: $HADOOP_HOME is deprecated.
>>>> datanode running as process 16117. Stop it first.
>>>> 
>>>> Does it mean JPS is no longer a tool to check DN in secure mode?
>>>> 
>>>> Thanks
>>>> 
>>>> 
>>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>> 
>>>>> Hi Harsh,
>>>>> 
>>>>> Thank you very much for your reply, got it!
>>>>> 
>>>>> Thanks
>>>>> ac
>>>>> 
>>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>> 
>>>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>>>> needs to be started as root to grab reserved ports), and needs a
>>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>> 
>>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I am setting up HDFS security with Kerberos:
>>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>> 
>>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>> 
>>>>>>> OS: Ubuntu 12.04
>>>>>>> Hadoop: 1.0.4
>>>>>>> 
>>>>>>> It seems that it could login successfully but something is missing
>>>>>>> Please help!
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Harsh J
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>> 
> 
> 
> 
> -- 
> Harsh J
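
[Editor's note: the jps limitation discussed above can be worked around from
the command line. A minimal sketch, assuming only that the secure DataNode's
main class is org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter,
as shown in the ps output earlier in this message:]

```shell
# jps cannot name JVMs launched by a custom wrapper such as jsvc, so
# match the DataNode's main class in the full ps command line instead.
# The bracketed [S] keeps grep from matching its own command line.
ps axu | grep '[S]ecureDataNodeStarter' \
  || echo "no secure DataNode running"
```

[On the nodes in this thread the match shows both the root-owned jsvc
launcher and the hduser child JVM; on a machine with no secure DataNode
the fallback message is printed instead.]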


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by Harsh J <ha...@cloudera.com>.
Could you also check what 16152 is? The jsvc is a launcher process,
not the JVM itself.

As I mentioned, JPS is pretty reliable, just won't show the name of
the JVM launched by a custom wrapper - and will show just the PID.

On Mon, Nov 26, 2012 at 7:35 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
> Hi,
>
> Thanks for your reply.
>
> However, I think 16152 should not be the DN, since
> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and
> 2) ps axu | grep 16117, I got
> root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...
>
> These are the two reasons that I think JPS is no longer a tool to check secure DN.
>
> Thanks again!
>
>
> On 26 Nov 2012, at 9:47 PM, Harsh J wrote:
>
>> The 16152 should be the DN JVM I think. This is a jps limitation, as
>> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
>> and jsvc (which secure mode DN uses) is such a custom launcher.
>>
>> "The jps command uses the java launcher to find the class name and
>> arguments passed to the main method. If the target JVM is started with
>> a custom launcher, the class name (or JAR file name) and the arguments
>> to the main method will not be available. In this case, the jps
>> command will output the string Unknown for the class name or JAR file
>> name and for the arguments to the main method."
>>
>> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>> Hi,
>>>
>>> A question:
>>> I started Secure DN then ran JPS as root, I could not find any running DN:
>>> 16152
>>> 16195 Jps
>>>
>>> However, when I tried to start the secure DN again, I got:
>>> Warning: $HADOOP_HOME is deprecated.
>>> datanode running as process 16117. Stop it first.
>>>
>>> Does it mean JPS is no longer a tool to check DN in secure mode?
>>>
>>> Thanks
>>>
>>>
>>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>>>
>>>> Hi Harsh,
>>>>
>>>> Thank you very much for your reply, got it!
>>>>
>>>> Thanks
>>>> ac
>>>>
>>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>>>
>>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>>> needs to be started as root to grab reserved ports), and needs a
>>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>>>
>>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I am setting up HDFS security with Kerberos:
>>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>>>
>>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>>>
>>>>>> OS: Ubuntu 12.04
>>>>>> Hadoop: 1.0.4
>>>>>>
>>>>>> It seems that it could login successfully but something is missing
>>>>>> Please help!
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Harsh J
>>>>
>>>
>>
>>
>>
>> --
>> Harsh J
>



-- 
Harsh J
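
[Editor's note: the root requirement Harsh describes comes down to binding
ports below 1024. A hedged sketch of the underlying OS restriction - port
1004 here is only an illustrative privileged port; the actual ports depend
on your dfs.datanode.address / dfs.datanode.http.address settings:]

```shell
# A secure DataNode must bind privileged ports (below 1024), which an
# unprivileged user cannot do - hence "Cannot start secure cluster
# without privileged resources" when it is not started via root + jsvc.
python3 - <<'EOF'
import socket
s = socket.socket()
try:
    s.bind(("127.0.0.1", 1004))  # illustrative privileged port
    print("bind succeeded (root or CAP_NET_BIND_SERVICE)")
except PermissionError:
    print("bind refused: privileged port needs root")
finally:
    s.close()
EOF
```

[Starting the DataNode as root via jsvc lets it grab these ports first and
then drop to the configured unprivileged user, which is why the launcher
shows up under root while the child JVM runs as hduser.]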


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

Thanks for your reply.

However, I think 16152 should not be the DN, since
1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and 
2) ps axu | grep 16117, I got
root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...

These are the two reasons that I think JPS is no longer a tool to check secure DN.

Thanks again!


On 26 Nov 2012, at 9:47 PM, Harsh J wrote:

> The 16152 should be the DN JVM I think. This is a jps limitation, as
> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
> and jsvc (which secure mode DN uses) is such a custom launcher.
> 
> "The jps command uses the java launcher to find the class name and
> arguments passed to the main method. If the target JVM is started with
> a custom launcher, the class name (or JAR file name) and the arguments
> to the main method will not be available. In this case, the jps
> command will output the string Unknown for the class name or JAR file
> name and for the arguments to the main method."
> 
> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> A question:
>> I started Secure DN then ran JPS as root, I could not find any running DN:
>> 16152
>> 16195 Jps
>> 
>> However, when I tried to start the secure DN again, I got:
>> Warning: $HADOOP_HOME is deprecated.
>> datanode running as process 16117. Stop it first.
>> 
>> Does it mean JPS is no longer a tool to check DN in secure mode?
>> 
>> Thanks
>> 
>> 
>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>> 
>>> Hi Harsh,
>>> 
>>> Thank you very much for your reply, got it!
>>> 
>>> Thanks
>>> ac
>>> 
>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>> 
>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>> needs to be started as root to grab reserved ports), and needs a
>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>> 
>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>> Hi,
>>>>> 
>>>>> I am setting up HDFS security with Kerberos:
>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>> 
>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>> 
>>>>> OS: Ubuntu 12.04
>>>>> Hadoop: 1.0.4
>>>>> 
>>>>> It seems that it could login successfully but something is missing
>>>>> Please help!
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Harsh J
>>> 
>> 
> 
> 
> 
> -- 
> Harsh J


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

Thanks for your reply.

However, I think 16152 should not be the DN, since
1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and 
2) ps axu | grep 16117, I got
root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...

These are the two reasons that I think JPS is no longer a tool to check secure DN.

Thanks again!


On 26 Nov 2012, at 9:47 PM, Harsh J wrote:

> The 16152 should be the DN JVM I think. This is a jps limitation, as
> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
> and jsvc (which secure mode DN uses) is such a custom launcher.
> 
> "The jps command uses the java launcher to find the class name and
> arguments passed to the main method. If the target JVM is started with
> a custom launcher, the class name (or JAR file name) and the arguments
> to the main method will not be available. In this case, the jps
> command will output the string Unknownfor the class name or JAR file
> name and for the arguments to the main method."
> 
> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> A question:
>> I started Secure DN then ran JPS as root, I could not find any running DN:
>> 16152
>> 16195 Jps
>> 
>> However, when I tried to start the secure DN again, I got:
>> Warning: $HADOOP_HOME is deprecated.
>> datanode running as process 16117. Stop it first.
>> 
>> Does it mean JPS is no longer a tool to check DN in secure mode?
>> 
>> Thanks
>> 
>> 
>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>> 
>>> Hi Harsh,
>>> 
>>> Thank you very much for your reply, got it!
>>> 
>>> Thanks
>>> ac
>>> 
>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>> 
>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>> needs to be started as root to grab reserved ports), and needs a
>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>> 
>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>> Hi,
>>>>> 
>>>>> I am setting up HDFS security with Kerberos:
>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>> 
>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>> 
>>>>> OS: Ubuntu 12.04
>>>>> Hadoop: 1.0.4
>>>>> 
>>>>> It seems that it could login successfully but something is missing
>>>>> Please help!
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Harsh J
>>> 
>> 
> 
> 
> 
> -- 
> Harsh J


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

Thanks for your reply.

However, I think 16152 should not be the DN, since
1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (i.e. I ran start datanode twice), and 
2) ps axu | grep 16117, I got
root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...

These are the two reasons that I think JPS is no longer a tool to check secure DN.

Thanks again!


On 26 Nov 2012, at 9:47 PM, Harsh J wrote:

> The 16152 should be the DN JVM I think. This is a jps limitation, as
> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
> and jsvc (which secure mode DN uses) is such a custom launcher.
> 
> "The jps command uses the java launcher to find the class name and
> arguments passed to the main method. If the target JVM is started with
> a custom launcher, the class name (or JAR file name) and the arguments
> to the main method will not be available. In this case, the jps
> command will output the string Unknownfor the class name or JAR file
> name and for the arguments to the main method."
> 
> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> A question:
>> I started Secure DN then ran JPS as root, I could not find any running DN:
>> 16152
>> 16195 Jps
>> 
>> However, when I tried to start the secure DN again, I got:
>> Warning: $HADOOP_HOME is deprecated.
>> datanode running as process 16117. Stop it first.
>> 
>> Does it mean JPS is no longer a tool to check DN in secure mode?
>> 
>> Thanks
>> 
>> 
>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>> 
>>> Hi Harsh,
>>> 
>>> Thank you very much for your reply, got it!
>>> 
>>> Thanks
>>> ac
>>> 
>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>> 
>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>> needs to be started as root to grab reserved ports), and needs a
>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>> 
>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>> Hi,
>>>>> 
>>>>> I am setting up HDFS security with Kerberos:
>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>> 
>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>> 
>>>>> OS: Ubuntu 12.04
>>>>> Hadoop: 1.0.4
>>>>> 
>>>>> It seems that it could login successfully but something is missing
>>>>> Please help!
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Harsh J
>>> 
>> 
> 
> 
> 
> -- 
> Harsh J


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

Thanks for your reply.

However, I think 16152 is not the DN, because:
1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" reported 16117 (i.e. I ran start datanode twice), and
2) when I ran ps axu | grep 16117, I got:
root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/ ...

These two reasons are why I think jps is no longer a reliable tool for checking a secure DN.
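
For anyone hitting the same confusion, the ps-based check can be scripted. A minimal sketch: since jps cannot name jsvc-launched JVMs, filter the process table for the jsvc markers instead. The sample `ps` line below embeds the example PID and path quoted in this thread so the snippet is self-contained; on a live system you would pipe the real `ps axu` output instead.

```shell
# Simulated `ps axu` output; the DataNode line mirrors the one quoted
# above (PID and paths are just this thread's example values).
ps_output='root     16117  0.0  0.0  17004   904 pts/2    S    21:34   0:00 jsvc.exec -Dproc_datanode -outfile /usr/local/hadoop-1.0.4/libexec/...
hduser    9999   0.1  1.0  10000  2000 ?        Sl   21:30   0:05 java org.apache.hadoop.hdfs.server.namenode.NameNode'

# jps shows jsvc-launched JVMs with no class name, so grep for the jsvc
# markers and print the PID (second field) of any secure DataNode found:
echo "$ps_output" | grep 'jsvc.exec' | grep 'proc_datanode' | awk '{print $2}'
```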

Thanks again!


On 26 Nov 2012, at 9:47 PM, Harsh J wrote:

> The 16152 should be the DN JVM I think. This is a jps limitation, as
> seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
> and jsvc (which secure mode DN uses) is such a custom launcher.
> 
> "The jps command uses the java launcher to find the class name and
> arguments passed to the main method. If the target JVM is started with
> a custom launcher, the class name (or JAR file name) and the arguments
> to the main method will not be available. In this case, the jps
> command will output the string Unknown for the class name or JAR file
> name and for the arguments to the main method."
> 
> On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> A question:
>> I started Secure DN then ran JPS as root, I could not find any running DN:
>> 16152
>> 16195 Jps
>> 
>> However, when I tried to start the secure DN again, I got:
>> Warning: $HADOOP_HOME is deprecated.
>> datanode running as process 16117. Stop it first.
>> 
>> Does it mean JPS is no longer a tool to check DN in secure mode?
>> 
>> Thanks
>> 
>> 
>> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>> 
>>> Hi Harsh,
>>> 
>>> Thank you very much for your reply, got it!
>>> 
>>> Thanks
>>> ac
>>> 
>>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>> 
>>>> Secure DN needs to be started as root (it runs as proper user, but
>>>> needs to be started as root to grab reserved ports), and needs a
>>>> proper jsvc binary (for your arch/OS) available. Are you using
>>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>> 
>>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>>> Hi,
>>>>> 
>>>>> I am setting up HDFS security with Kerberos:
>>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>> 
>>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>> 
>>>>> OS: Ubuntu 12.04
>>>>> Hadoop: 1.0.4
>>>>> 
>>>>> It seems that it could login successfully but something is missing
>>>>> Please help!
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Harsh J
>>> 
>> 
> 
> 
> 
> -- 
> Harsh J


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by Harsh J <ha...@cloudera.com>.
The 16152 should be the DN JVM I think. This is a jps limitation, as
seen at http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
and jsvc (which secure mode DN uses) is such a custom launcher.

"The jps command uses the java launcher to find the class name and
arguments passed to the main method. If the target JVM is started with
a custom launcher, the class name (or JAR file name) and the arguments
to the main method will not be available. In this case, the jps
command will output the string Unknown for the class name or JAR file
name and for the arguments to the main method."
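
Related: the "datanode running as process 16117. Stop it first." message is also independent of jps. hadoop-daemon.sh records the daemon PID in a pid file and probes it with kill -0 before starting. A minimal sketch of that logic; the function name and the /tmp pid-file location are assumptions (Hadoop 1.x defaults to /tmp when HADOOP_PID_DIR is unset):

```shell
# Re-implements the pid-file liveness check hadoop-daemon.sh performs
# before starting a daemon. Sketch only; names and paths are assumptions.
check_running() {
  pidfile=$1
  if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "datanode running as process $(cat "$pidfile"). Stop it first."
  else
    echo "datanode not running"
  fi
}

# Demo against this shell's own PID so the result is deterministic:
echo "$$" > /tmp/demo-datanode.pid
check_running /tmp/demo-datanode.pid
rm -f /tmp/demo-datanode.pid
check_running /tmp/demo-datanode.pid
```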

On Mon, Nov 26, 2012 at 7:11 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
> Hi,
>
> A question:
> I started Secure DN then ran JPS as root, I could not find any running DN:
> 16152
> 16195 Jps
>
> However, when I tried to start the secure DN again, I got:
> Warning: $HADOOP_HOME is deprecated.
> datanode running as process 16117. Stop it first.
>
> Does it mean JPS is no longer a tool to check DN in secure mode?
>
> Thanks
>
>
> On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:
>
>> Hi Harsh,
>>
>> Thank you very much for your reply, got it!
>>
>> Thanks
>> ac
>>
>> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>>
>>> Secure DN needs to be started as root (it runs as proper user, but
>>> needs to be started as root to grab reserved ports), and needs a
>>> proper jsvc binary (for your arch/OS) available. Are you using
>>> tarballs or packages (and if packages, are they from Bigtop)?
>>>
>>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>>> Hi,
>>>>
>>>> I am setting up HDFS security with Kerberos:
>>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>>
>>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>>
>>>> OS: Ubuntu 12.04
>>>> Hadoop: 1.0.4
>>>>
>>>> It seems that it could login successfully but something is missing
>>>> Please help!
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>
>



-- 
Harsh J

RE: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "Kartashov, Andy" <An...@mpac.ca>.
You could also run $sudo service --status-all
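
One caveat worth noting: service --status-all only lists daemons with registered init scripts, so it helps for package installs (e.g. Bigtop/CDH packages) and not for a tarball install started via hadoop-daemon.sh. A self-contained sketch of filtering its output; the service names below are hypothetical sample data standing in for the real command:

```shell
# Filter `service --status-all` output for Hadoop daemons. The sample
# output stands in for the real command; service names are hypothetical.
status_output=' [ + ]  hadoop-hdfs-datanode
 [ + ]  hadoop-hdfs-namenode
 [ - ]  apache2'
echo "$status_output" | grep hadoop
```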


-----Original Message-----
From: ac@hsk.hk [mailto:ac@hsk.hk]
Sent: Monday, November 26, 2012 8:41 AM
To: user@hadoop.apache.org
Cc: ac@hsk.hk
Subject: Re: Datanode: "Cannot start secure cluster without privileged resources"

Hi,

A question:
I started Secure DN then ran JPS as root, I could not find any running DN:
16152
16195 Jps

However, when I tried to start the secure DN again, I got:
Warning: $HADOOP_HOME is deprecated.
datanode running as process 16117. Stop it first.

Does it mean JPS is no longer a tool to check DN in secure mode?

Thanks


On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:

> Hi Harsh,
>
> Thank you very much for your reply, got it!
>
> Thanks
> ac
>
> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
>
>> Secure DN needs to be started as root (it runs as proper user, but
>> needs to be started as root to grab reserved ports), and needs a
>> proper jsvc binary (for your arch/OS) available. Are you using
>> tarballs or packages (and if packages, are they from Bigtop)?
>>
>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>> Hi,
>>>
>>> I am setting up HDFS security with Kerberos:
>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>>
>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>>
>>> OS: Ubuntu 12.04
>>> Hadoop: 1.0.4
>>>
>>> It seems that it could login successfully but something is missing
>>> Please help!
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Harsh J
>

NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le présent courriel et toute pièce jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur et peuvent être couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le destinataire prévu de ce courriel, supprimez-le et contactez immédiatement l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent courriel

Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

A question:
I started Secure DN then ran JPS as root, I could not find any running DN:
16152 
16195 Jps

However, when I tried to start the secure DN again, I got: 
Warning: $HADOOP_HOME is deprecated.
datanode running as process 16117. Stop it first.

Does it mean JPS is no longer a tool to check DN in secure mode?

Thanks


On 26 Nov 2012, at 9:03 PM, ac@hsk.hk wrote:

> Hi Harsh,
> 
> Thank you very much for your reply, got it!
> 
> Thanks
> ac
> 
> On 26 Nov 2012, at 8:32 PM, Harsh J wrote:
> 
>> Secure DN needs to be started as root (it runs as proper user, but
>> needs to be started as root to grab reserved ports), and needs a
>> proper jsvc binary (for your arch/OS) available. Are you using
>> tarballs or packages (and if packages, are they from Bigtop)?
>> 
>> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>> Hi,
>>> 
>>> I am setting up HDFS security with Kerberos:
>>> When I manually started the first datanode, I got the following messages (the namenode is started):
>>> 
>>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>>> 
>>> OS: Ubuntu 12.04
>>> Hadoop: 1.0.4
>>> 
>>> It seems that it could login successfully but something is missing
>>> Please help!
>>> 
>>> Thanks
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> Harsh J
> 


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi Harsh,

Thank you very much for your reply, got it!

Thanks
ac

On 26 Nov 2012, at 8:32 PM, Harsh J wrote:

> Secure DN needs to be started as root (it runs as proper user, but
> needs to be started as root to grab reserved ports), and needs a
> proper jsvc binary (for your arch/OS) available. Are you using
> tarballs or packages (and if packages, are they from Bigtop)?
> 
> On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> Hi,
>> 
>> I am setting up HDFS security with Kerberos:
>> When I manually started the first datanode, I got the following messages (the namenode is started):
>> 
>> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
>> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>> 
>> OS: Ubuntu 12.04
>> Hadoop: 1.0.4
>> 
>> It seems that it could login successfully but something is missing
>> Please help!
>> 
>> Thanks
>> 
>> 
>> 
>> 
> 
> 
> 
> -- 
> Harsh J


Re: Datanode: "Cannot start secure cluster without privileged resources"

Posted by Harsh J <ha...@cloudera.com>.
Secure DN needs to be started as root (it runs as the proper user, but
needs to be started as root to grab the reserved ports), and needs a
proper jsvc binary (for your arch/OS) available. Are you using
tarballs or packages (and if packages, are they from Bigtop)?

On Mon, Nov 26, 2012 at 5:21 PM, ac@hsk.hk <ac...@hsk.hk> wrote:
> Hi,
>
> I am setting up HDFS security with Kerberos:
> When I manually started the first datanode, I got the following messages (the namenode is started):
>
> 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
> 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
>
> OS: Ubuntu 12.04
> Hadoop: 1.0.4
>
> It seems that it could login successfully but something is missing
> Please help!
>
> Thanks
>
>
>
>



-- 
Harsh J
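
Harsh's recipe can be sketched as a start procedure. This is a sketch only: HADOOP_HOME, the jsvc location and the "hdfs" user below are assumptions for a tarball install, not values taken from this thread.

```shell
# A secure DataNode is launched by root so that jsvc can bind the
# privileged (<1024) data/HTTP ports; it then drops to the HDFS user.
export HADOOP_SECURE_DN_USER=hdfs   # unprivileged user the DN runs as (assumption)
export JSVC_HOME=/opt/jsvc          # directory holding a jsvc built for this arch/OS (assumption)
echo "DataNode will be started by root and run as $HADOOP_SECURE_DN_USER"
# then, as root:
#   $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
```

If jsvc is missing or built for the wrong architecture, the DataNode fails with the same "Cannot start secure cluster without privileged resources" error even when the Kerberos login succeeds.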

Datanode: "Cannot start secure cluster without privileged resources"

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

I am setting up HDFS security with Kerberos: 
When I manually started the first datanode, I got the following messages (the namenode is started):

1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user ....
2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.

OS: Ubuntu 12.04
Hadoop: 1.0.4

It seems that it could log in successfully but something is missing.
Please help!

Thanks



 

Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

Thanks for the useful link and I will take a look.
Thanks
Arthur

P.S. My name is Arthur Chan, so I sign as "ac" when replying from a mobile device.

 

On 25 Nov 2012, at 11:51 AM, Stack wrote:

> On Sat, Nov 24, 2012 at 10:31 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
>> I am also using Ubuntu 12.04, Zookeeper 3.4.4 HBase 0.94.2 and Hadoop 1.0.4. (64-bit nodes), I finally managed to have the HBase cluster up and running, below is the line in my /etc/hosts for your reference:
>> 
>> #127.0.0.1      localhost
>> 127.0.0.1       localhost.localdomain localhost
>> 
>> Based on my setup experience, here is my advice:
>> 1) /etc/hosts: do not comment out the 127.0.0.1 line in /etc/hosts
>> 2) ZooKeeper: do not sync its "data" and "datalog" folders to the other ZooKeeper servers in your deployment
>> 3) Check your start procedure:
>>        - check your firewall policies; make sure each server can use the required TCP/IP ports, especially port 2181 in your case
>>        - start ZooKeeper first and make sure all other servers can reach the ZooKeeper servers; use "bin/zkCli.sh -server XXXX" or "echo ruok | nc XXXX 2181" to test every ZooKeeper from each HBase server
>>        - start Hadoop; use jps to make sure the NameNode, SecondaryNameNode and DataNodes are up and running, and check the log files on each server
>>        - start MapReduce if you need it
>>        - start HBase; use jps to check HBase's HMaster and HRegionServers, then wait a while and check them again. If all the HBase processes are gone but Hadoop is still up and running, it is most likely an HBase configuration issue in hbase-site.xml related to the ZooKeeper settings, or a ZooKeeper configuration/data issue.
>> 
>> 
>> Hope these help and good luck.
>> ac
>> 
> 
> Thanks ac for the clean instructions.  We have an ubuntu callout here
> on localhost in /etc/hosts:
> http://hbase.apache.org/book.html#basic.prerequisites  What else would
> you suggest we add to the reference guide?
> 
> Thanks,
> St.Ack
> P.S. I used to have a friend named AC but in his case it stood for
> "Anti-Christ".


Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by Stack <st...@duboce.net>.
On Sat, Nov 24, 2012 at 10:31 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
> I am also using Ubuntu 12.04, Zookeeper 3.4.4 HBase 0.94.2 and Hadoop 1.0.4. (64-bit nodes), I finally managed to have the HBase cluster up and running, below is the line in my /etc/hosts for your reference:
>
> #127.0.0.1      localhost
> 127.0.0.1       localhost.localdomain localhost
>
> Based on my setup experience, here is my advice:
> 1) /etc/hosts: do not comment out the 127.0.0.1 line in /etc/hosts
> 2) ZooKeeper: do not sync its "data" and "datalog" folders to the other ZooKeeper servers in your deployment
> 3) Check your start procedure:
>         - check your firewall policies; make sure each server can use the required TCP/IP ports, especially port 2181 in your case
>         - start ZooKeeper first and make sure all other servers can reach the ZooKeeper servers; use "bin/zkCli.sh -server XXXX" or "echo ruok | nc XXXX 2181" to test every ZooKeeper from each HBase server
>         - start Hadoop; use jps to make sure the NameNode, SecondaryNameNode and DataNodes are up and running, and check the log files on each server
>         - start MapReduce if you need it
>         - start HBase; use jps to check HBase's HMaster and HRegionServers, then wait a while and check them again. If all the HBase processes are gone but Hadoop is still up and running, it is most likely an HBase configuration issue in hbase-site.xml related to the ZooKeeper settings, or a ZooKeeper configuration/data issue.
>
>
> Hope these help and good luck.
> ac
>

Thanks ac for the clean instructions.  We have an ubuntu callout here
on localhost in /etc/hosts:
http://hbase.apache.org/book.html#basic.prerequisites  What else would
you suggest we add to the reference guide?

Thanks,
St.Ack
P.S. I used to have a friend named AC but in his case it stood for
"Anti-Christ".

Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Hi,

I am also using Ubuntu 12.04, ZooKeeper 3.4.4, HBase 0.94.2 and Hadoop 1.0.4 (64-bit nodes). I finally managed to get the HBase cluster up and running; below are the relevant lines in my /etc/hosts for your reference:

#127.0.0.1      localhost
127.0.0.1       localhost.localdomain localhost

Based on my setup experience, here is my advice:
1) /etc/hosts: do not comment out the 127.0.0.1 line in /etc/hosts
2) ZooKeeper: do not sync its "data" and "datalog" folders to the other ZooKeeper servers in your deployment
3) Check your start procedure:
	- check your firewall policies; make sure each server can use the required TCP/IP ports, especially port 2181 in your case
	- start ZooKeeper first and make sure all other servers can reach the ZooKeeper servers; use "bin/zkCli.sh -server XXXX" or "echo ruok | nc XXXX 2181" to test every ZooKeeper from each HBase server
	- start Hadoop; use jps to make sure the NameNode, SecondaryNameNode and DataNodes are up and running, and check the log files on each server
	- start MapReduce if you need it
	- start HBase; use jps to check HBase's HMaster and HRegionServers, then wait a while and check them again. If all the HBase processes are gone but Hadoop is still up and running, it is most likely an HBase configuration issue in hbase-site.xml related to the ZooKeeper settings, or a ZooKeeper configuration/data issue.


Hope these help and good luck.
ac
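
The "ruok" probe in step 3 can be wrapped in a small helper. A sketch; the hostnames in the commented loop are the ones from this thread, so adjust them to your own quorum.

```shell
# A healthy ZooKeeper answers the four-letter-word probe "ruok" with "imok".
check_zk() {
  [ "$(echo ruok | nc -w 2 "$1" 2181 2>/dev/null)" = "imok" ]
}
# Run this from every HBase server before starting HBase, e.g.:
#   for host in hadoop1 hadoop2 hadoop3; do
#     check_zk "$host" && echo "$host ok" || echo "$host NOT answering on 2181"
#   done
echo "check_zk defined"
```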



Originally I had 7 nodes, 5 of them 64-bit and 2 of them 32-bit; all the 64-bit servers are connected to network A and the two 


On 24 Nov 2012, at 10:51 AM, Michael Segel wrote:

> Hi Alan, 
> 
> Yes. I am suggesting that. 
> 
> Your 127.0.0.1 subnet should be localhost  only and then your other entries. 
> It looks like 10.64.155.52 is the external interface (eth0) for the machine hadoop1.
> 
> Adding it to 127.0.0.1 confuses HBase since it will use the first entry it sees. (Going from memory) So it will always look to local hosts.
> 
> I think that should fix your problem. 
> 
> HTH
> 
> -Mike
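
Mike's suggestion amounts to an /etc/hosts along these lines (addresses and hostnames taken from Alan's mail; a sketch, not a definitive layout):

```text
127.0.0.1    localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
```

The key point is that the machine's own hostname (hadoop1) appears only on its routable address, never on 127.0.0.1, so HBase does not resolve it to the loopback interface.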
> 
> On Nov 23, 2012, at 10:11 AM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:
> 
>> Mike,
>> 
>> 
>> 
>>           Yes I do.
>> 
>> 
>> 
>> With this /etc/hosts HBase works but NX and VNC do not.
>> 
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost
>> 
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>> 
>> ...
>> 
>> 
>> 
>> With this /etc/hosts NX and VNC work but HBase does not.
>> 
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>> 
>> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
>> 
>> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
>> 
>> ...
>> 
>> 
>> 
>> I assume from your question that I should try replacing
>> 
>> 127.0.0.1 hadoop1 localhost.localdomain localhost
>> 
>> with simply:
>> 
>> 127.0.0.1 localhost
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Alan
>> 
>> 
>> 
>> 
>> 
>> -----Original Message-----
>> From: Michael Segel [mailto:michael_segel@hotmail.com]
>> Sent: Wednesday, November 21, 2012 7:40 PM
>> To: user@hbase.apache.org
>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>> 
>> 
>> 
>> Hi,
>> 
>> 
>> 
>> Quick question...
>> 
>> 
>> 
>> DO you have 127.0.0.1 set to anything other than localhost?
>> 
>> 
>> 
>> If not, then it should be fine and you may want to revert to hard coded IP addresses on your other configuration files.
>> 
>> 
>> 
>> If you have Hadoop up and working, then you should be able to stand up HBase on top of that.
>> 
>> 
>> 
>> Just doing a quick look, and it seems that your name for your hadoop is resolving to your localhost.
>> 
>> What does your /etc/ hosts file look like?
>> 
>> 
>> 
>> How many machines in your cluster?
>> 
>> 
>> 
>> Have you thought about pulling down a 'free' copy of Cloudera, MapR or if Hortonworks has one ...
>> 
>> 
>> 
>> If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.
>> 
>> 
>> 
>> HTH
>> 
>> 
>> 
>> -Mike
>> 
>> 
>> 
>> On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Al...@ngc.com>> wrote:
>> 
>> 
>> 
>>> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.
>> 
>>> 
>> 
>>> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.
>> 
>>> 
>> 
>>> It seems I have a choice: access my servers remotely or run HBase and these 2 are mutually incompatible.  I think my options are either:
>> 
>>> a) revert to an old version of HBase
>> 
>>> b) switch to Accumulo, or
>> 
>>> c) switch to Cassandra.
>> 
>>> 
>> 
>>> Alan
>> 
>>> 
>> 
>>> 
>> 
>>> -----Original Message-----
>> 
>>> From: Mohammad Tariq [mailto:dontariq@gmail.com]
>> 
>>> Sent: Wednesday, November 21, 2012 3:11 PM
>> 
>>> To: user@hbase.apache.org<ma...@hbase.apache.org>
>> 
>>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>> 
>>> 
>> 
>>> Hello Alan,
>> 
>>> 
>> 
>>>  It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you
>> 
>>> have proper DNS resolution as it plays an important role in proper Hbase
>> 
>>> functioning. Also add the "hbase.zookeeper.property.clientPort" property in
>> 
>>> your hbase-site.xml file and see if it works for you.
>> 
>>> 
>> 
>>> Regards,
>> 
>>>  Mohammad Tariq
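
For reference, the property Mohammad mentions takes the ZooKeeper client port; 2181 is already the default, so setting it explicitly mainly guards against a mismatch with zoo.cfg. A sketch of the hbase-site.xml fragment:

```xml
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```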
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com>>wrote:
>> 
>>> 
>> 
>>>> I'd appreciate any suggestions as to how to get HBase up and running.
>> 
>>>> Right now it dies after a few seconds on all servers.  I am using Hadoop
>> 
>>>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>> 
>>>> 
>> 
>>>> History: Yesterday I managed to get HBase 0.94.2 working but only after
>> 
>>>> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
>> 
>>>> clocks).  All was fine until this morning when I realized I could not
>> 
>>>> initiate remote log-ins to my servers (using VNC or NX) until I restored
>> 
>>>> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
>> 
>>>> non-working HBase.
>> 
>>>> 
>> 
>>>> With HBase managing ZK I see the following in the HBase Master and ZK
>> 
>>>> logs, respectively:
>> 
>>>> 2012-11-21 13:40:22,236 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase
>> 
>>>> 
>> 
>>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> 
>>>> Exception causing close of session 0x0 due to java.io.IOException:
>> 
>>>> ZooKeeperServer not running
>> 
>>>> 
>> 
>>>> At roughly the same time (clocks not perfectly synchronized) I see this in
>> 
>>>> a Regionserver log:
>> 
>>>> 2012-11-21 13:40:57,727 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> ...
>> 
>>>> 2012-11-21 13:40:57,848 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> 
>>>> ZooKeeper exception:
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 
>>>> 
>> 
>>>> Logs and configuration follows.
>> 
>>>> 
>> 
>>>> Then I tried managing ZK myself and HBase then fails for seemingly
>> 
>>>> different reasons.
>> 
>>>> 2012-11-21 14:46:37,320 WARN
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> 
>>>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
>> 
>>>> is not a retry
>> 
>>>> 
>> 
>>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
>> 
>>>> Unhandled exception. Starting shutdown.
>> 
>>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
>> 
>>>> connection exception: java.net.ConnectException: Connection refused
>> 
>>>> 
>> 
>>>> Both HMaster error logs (self-managed and me-managed ZK) mention the
>> 
>>>> 127.0.0.1 IP address instead of referring to the server by its name
>> 
>>>> (hadoop1) or its true IP address or simply as localhost.
>> 
>>>> 
>> 
>>>> So, start-hbase.sh works OK (HB managing ZK):
>> 
>>>> ngc@hadoop1:~/hbase-0.94.2$<mailto:ngc@hadoop1:~/hbase-0.94.2$> bin/start-hbase.sh
>> 
>>>> hadoop1: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
>> 
>>>> hadoop2: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
>> 
>>>> hadoop3: starting zookeeper, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
>> 
>>>> starting master, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
>> 
>>>> hadoop2: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
>> 
>>>> hadoop6: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
>> 
>>>> hadoop3: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
>> 
>>>> hadoop5: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
>> 
>>>> hadoop4: starting regionserver, logging to
>> 
>>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>> 
>>>> 
>> 
>>>> I have in hbase-site.xml:
>> 
>>>> <property>
>> 
>>>>  <name>hbase.cluster.distributed</name>
>> 
>>>>  <value>true</value>
>> 
>>>> </property>
>> 
>>>>    <property>
>> 
>>>>          <name>hbase.master</name>
>> 
>>>>          <value>hadoop1:60000</value>
>> 
>>>>      </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.rootdir</name>
>> 
>>>>  <value>hdfs://hadoop1:9000/hbase</value>
>> 
>>>> </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.zookeeper.property.dataDir</name>
>> 
>>>>  <value>/tmp/zookeeper_data</value>
>> 
>>>> </property>
>> 
>>>> <property>
>> 
>>>>  <name>hbase.zookeeper.quorum</name>
>> 
>>>>  <value>hadoop1,hadoop2,hadoop3</value>
>> 
>>>> </property>
>> 
>>>> 
>> 
>>>> I have in hbase-env.sh:
>> 
>>>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
>> 
>>>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
>> 
>>>> export HBASE_HEAPSIZE=2000
>> 
>>>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
>> 
>>>> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>> 
>>>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
>> 
>>>> export HBASE_MANAGES_ZK=true
>> 
>>>> 
>> 
>>>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
>> 
>>>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
>> 
>>>> core file size          (blocks, -c) 0
>> 
>>>> data seg size           (kbytes, -d) unlimited
>> 
>>>> scheduling priority             (-e) 0
>> 
>>>> file size               (blocks, -f) unlimited
>> 
>>>> pending signals                 (-i) 386178
>> 
>>>> max locked memory       (kbytes, -l) 64
>> 
>>>> max memory size         (kbytes, -m) unlimited
>> 
>>>> open files                      (-n) 1024
>> 
>>>> pipe size            (512 bytes, -p) 8
>> 
>>>> POSIX message queues     (bytes, -q) 819200
>> 
>>>> real-time priority              (-r) 0
>> 
>>>> stack size              (kbytes, -s) 8192
>> 
>>>> cpu time               (seconds, -t) unlimited
>> 
>>>> max user processes              (-u) 386178
>> 
>>>> virtual memory          (kbytes, -v) unlimited
>> 
>>>> file locks                      (-x) unlimited
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> HBase 0.94.2
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 
>>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> 
>>>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 
>>>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
>> 
>>>> serverside HConnection retries=100
>> 
>>>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> 
>>>> Thread-2
>> 
>>>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> 
>>>> Initializing RPC Metrics with hostName=HMaster, port=60000
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:host.name=hadoop1
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.version=1.6.0_25
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.vendor=Sun Microsystems Inc.
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hba
se-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/h
adoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>>>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>>>> 2012-11-21 13:40:22,099 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 742@hadoop1
>>>> 2012-11-21 13:40:22,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:22,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
>>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>>>> 2012-11-21 13:40:22,411 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>>> 2012-11-21 13:40:22,747 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>>>> 2012-11-21 13:40:22,967 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:24,176 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:24,277 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>>>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:24,767 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:25,757 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:26,597 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:27,775 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:28,318 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:28,419 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
>>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:29,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:30,039 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:31,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:32,143 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:32,480 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>>> 2012-11-21 13:40:33,295 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:33,295 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>>> 2012-11-21 13:40:34,962 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>>> 2012-11-21 13:40:35,661 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>>> 2012-11-21 13:40:36,523 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>>> 2012-11-21 13:40:36,625 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>> 2012-11-21 13:40:36,625 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>>>> 2012-11-21 13:40:36,626 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
>>>> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
>>>>    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>>>>    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>>>>    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>>>    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>>>>    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
>>>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>>>>    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>>>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>>>>    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>>>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>>    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>>>>    ... 5 more
>> 
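The telling pattern in the master log above is that the quorum member hadoop1 resolves to 127.0.0.1 while hadoop2 and hadoop3 resolve to 10.64.155.x addresses, which matches the /etc/hosts problem described at the top of the thread. A quick way to spot this before starting HBase is to test what each quorum hostname resolves to on each node. This standalone script is purely illustrative (the hadoop1/hadoop2/hadoop3 names are the ones used in this cluster), not part of HBase:

```python
import socket

def resolves_to_loopback(hostname):
    """True if the name resolves to a 127.x.x.x address (or not at all)."""
    try:
        return socket.gethostbyname(hostname).startswith("127.")
    except socket.gaierror:
        return True  # an unresolvable quorum host is also a problem; flag it

# 'localhost' should be loopback; ZooKeeper quorum hosts must NOT be,
# or peers on other machines can never reach that server.
for host in ("localhost", "hadoop1", "hadoop2", "hadoop3"):
    status = "LOOPBACK/unresolvable (bad for a quorum host)" if resolves_to_loopback(host) else "ok"
    print(host, "->", status)
```

If a quorum host prints as loopback, the usual remedy is to keep `127.0.0.1 localhost` in /etc/hosts but map the machine's own hostname to its LAN IP instead of 127.0.0.1 (or 127.0.1.1 on Ubuntu).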
>>>> From server hadoop2 (running regionserver, ZK, DN, TT)
>>>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 0
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 193105
>>>> max locked memory       (kbytes, -l) 64
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 1024
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 8192
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) 193105
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
>> 
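One thing worth flagging in the ulimit dump above, separate from the ZooKeeper errors: open files is still at the Ubuntu default of 1024, which the HBase documentation warns is too low for a regionserver and tends to surface later as "Too many open files" IOExceptions. A common fix is raising the limit for the user that runs HBase in /etc/security/limits.conf; the user name (ngc, taken from the logs) and the values below are illustrative, not prescriptive:

```
# /etc/security/limits.conf -- example entries for the HBase user
ngc  -  nofile  32768
ngc  -  nproc   32000
```

The new limits take effect only on a fresh login session; verify with `ulimit -n` before restarting the regionserver.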
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/tmp/hbase-ngc/logs, -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log, -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,DRFAS]
>>>> 2012-11-21 13:40:57,222 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside HConnection retries=100
>>>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
>>>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache with maximum size 493.8m
>>>> 2012-11-21 13:40:57,699 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop2.aj.c2fse.northgrum.com
>>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hba
se-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.io.tmpdir=/tmp
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:java.compiler=<NA>
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.name=Linux
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.arch=amd64
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:os.version=3.0.0-12-generic
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.name=ngc
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.home=/home/ngc
>> 
>>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> 
>>>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 
2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=regionserver:60020
2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
2012-11-21 13:40:57,719 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12835@hadoop2
2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:57,727 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
2012-11-21 13:40:58,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
2012-11-21 13:40:58,726 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:00,660 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:00,761 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:01,422 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:02,370 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:02,627 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:02,627 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:03,968 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:04,733 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:04,733 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:04,835 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:05,741 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:05,741 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:06,192 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:07,313 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:07,313 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:08,273 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:09,090 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:09,090 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:09,711 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:09,711 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
2012-11-21 13:41:11,121 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:11,121 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
2012-11-21 13:41:11,600 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
2012-11-21 13:41:12,320 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:12,320 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
2012-11-21 13:41:12,861 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 13:41:12,861 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2012-11-21 13:41:12,962 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2012-11-21 13:41:12,962 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020 Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
    at java.lang.Thread.run(Thread.java:662)
2012-11-21 13:41:12,966 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020 Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
    at java.lang.Thread.run(Thread.java:662)
>> 
>>>> 2012-11-21 13:41:12,966 FATAL
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
>> 
>>>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception
>> 
>>>> during initialization, aborting
>> 
>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> 
>>>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 
>>>>    at
>> 
>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>> 
>>>>    at
>> 
>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>> 
>>>>    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>> 
>>>>    at java.lang.Thread.run(Thread.java:662)
>> 
>>>> 2012-11-21 13:41:12,969 FATAL
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
>> 
>>>> loaded coprocessors are: []
>> 
>>>> 2012-11-21 13:41:12,969 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
>> 
>>>> exception during initialization, aborting
>> 
>>>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server
>> 
>>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 
>>>> 2012-11-21 13:41:14,834 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:41:14,834 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening
>> 
>>>> socket connection to server
>> 
>>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 
>>>> 2012-11-21 13:41:15,335 WARN
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> 
>>>> java.lang.SecurityException: Unable to locate a login configuration
>> 
>>>> occurred when trying to find JAAS configuration.
>> 
>>>> 2012-11-21 13:41:15,335 INFO
>> 
>>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> 
>>>> SASL-authenticate because the default JAAS configuration section 'Client'
>> 
>>>> could not be found. If you are not using SASL, you may ignore this. On the
>> 
>>>> other hand, if you expected SASL to work, please fix your JAAS
>> 
>>>> configuration.
>> 
>>>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket
>> 
>>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> 
>>>> initiating session
>> 
>>>> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> 
>>>> read additional data from server sessionid 0x0, likely server has closed
>> 
>>>> socket, closing socket connection and attempting reconnect
>> 
>>>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
>> 
>>>> server on 60020
>> 
>>>> 2012-11-21 13:41:15,975 FATAL
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
>> 
>>>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS
>> 
>>>> failed.  Hence aborting RS.
>> 
>>>> java.io.IOException: Received the shutdown message while waiting.
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>> 
>>>>    at java.lang.Thread.run(Thread.java:662)
>> 
>>>> 2012-11-21 13:41:15,976 FATAL
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
>> 
>>>> loaded coprocessors are: []
>> 
>>>> 2012-11-21 13:41:15,976 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
>> 
>>>> of RS failed.  Hence aborting RS.
>> 
>>>> 2012-11-21 13:41:15,978 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
>> 
>>>> MXBean
>> 
>>>> 2012-11-21 13:41:15,980 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting;
>> 
>>>> hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
>> 
>>>> 2012-11-21 13:41:15,980 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>> 
>>>> 2012-11-21 13:41:15,981 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown
>> 
>>>> hook thread.
>> 
>>>> 2012-11-21 13:41:15,981 INFO
>> 
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>> 
>>>> 
>> 
Finally, in the zookeeper log from hadoop1 I have:
Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386178
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase
2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 400
2012-11-21 13:40:21,064 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 800
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 1600
2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55216 (no session established for client)
2012-11-21 13:40:22,373 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60339
2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60339 (no session established for client)
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60342
2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60342 (no session established for client)
2012-11-21 13:40:23,187 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55221
2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55221 (no session established for client)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 3200
2012-11-21 13:40:24,116 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.54:35599
2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.54:35599 (no session established for client)
2012-11-21 13:40:24,176 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55225
...

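The ZooKeeper log above shows it accepting connections from /127.0.0.1 alongside connections from the real 10.64.155.x addresses, which fits the symptom: with the `127.0.0.1` line restored in /etc/hosts, each host resolves its own name to loopback, so peers and clients end up talking to the wrong interface. As a quick sanity check (a sketch of my own, not an HBase utility; the helper name is made up), you can test whether the machine's hostname resolves into the loopback block the same way the Java processes resolve it:

```python
import socket

def resolves_to_loopback(addr: str) -> bool:
    """True if an IPv4 address falls in the 127.0.0.0/8 loopback block."""
    return addr.startswith("127.")

# Resolve this machine's own hostname, as ZooKeeper/HBase effectively do
# when they bind and advertise themselves.
hostname = socket.gethostname()
addr = socket.gethostbyname(hostname)
print(hostname, "->", addr)
if resolves_to_loopback(addr):
    print("WARNING: hostname resolves to loopback; remote peers cannot reach it")
```

If this warns, the usual workaround is to keep `127.0.0.1 localhost` in /etc/hosts (so VNC/NX keep working) but map the machine's actual hostname only to its routable IP, never to 127.0.0.1.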
>> 
Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386178
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000
2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting
2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915
2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>> 
>>>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>> 
>>>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>> 
>>>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>> 
>>>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>> 
>>>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>> 
>>>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>> 
>>>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>> 
>>>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>> 
>>>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying
>> 
>>>> connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>> 
>>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
>> 
>>>> Unhandled exception. Starting shutdown.
>> 
>>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
>> 
>>>> connection exception: java.net.ConnectException: Connection refused
>> 
>>>>    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>> 
>>>>    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>> 
>>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>> 
>>>>    at $Proxy10.getProtocolVersion(Unknown Source)
>> 
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>> 
>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>> 
>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>> 
>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>> 
>>>>    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> 
>>>>    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>> 
>>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>> 
>>>>    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>> 
>>>>    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>> 
>>>>    at
>> 
>>>> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>> 
>>>>  ...
>> 
>>>> 
>> 
>>>> [Message clipped]
>> 
>>> 
>> 
>> 
> 


Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by "Ratner, Alan S (IS)" <Al...@ngc.com>.
Thanks Mike - that solved the problem.
Alan Ratner, Northrop Grumman Information Systems

----- Original Message -----
From: Michael Segel [mailto:michael_segel@hotmail.com]
Sent: Friday, November 23, 2012 08:51 PM
To: user@hbase.apache.org <us...@hbase.apache.org>
Subject: Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Hi Alan, 

Yes. I am suggesting that. 

Your 127.0.0.1 line should map to localhost only, and then your other entries.
It looks like 10.64.155.52 is the external interface (eth0) for the machine hadoop1.

Adding hadoop1 to the 127.0.0.1 line confuses HBase, since it uses the first entry it sees (going from memory), so the hostname will always resolve to the loopback address.

I think that should fix your problem.

HTH

-Mike
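
[Editor's note: for reference, an /etc/hosts laid out the way Mike describes (addresses taken from earlier in this thread; adjust for your own interfaces) would look something like:

```
127.0.0.1    localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
```

The loopback line stays usable for NX/VNC, while hadoop1 resolves to the external interface for Hadoop and HBase.]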

On Nov 23, 2012, at 10:11 AM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:

> Mike,
> 
> 
> 
>            Yes I do.
> 
> 
> 
> With this /etc/hosts HBase works but NX and VNC do not.
> 
> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost
> 
> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
> 
> ...
> 
> 
> 
> With this /etc/hosts NX and VNC work but HBase does not.
> 
> 127.0.0.1 hadoop1 localhost.localdomain localhost
> 
> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
> 
> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
> 
> ...
> 
> 
> 
> I assume from your question that I should try replacing
> 
> 127.0.0.1 hadoop1 localhost.localdomain localhost
> 
> with simply:
> 
> 127.0.0.1 localhost
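
[Editor's note: Mike's "first entry wins" point can be illustrated with a small sketch. This is a simplified model of hosts-file lookup, not HBase's actual resolver, and the helper `resolve()` is hypothetical:

```python
# Simplified model of /etc/hosts lookup (not HBase's actual resolver):
# the first line whose aliases contain the name wins, as described in the thread.

def resolve(hosts_text, name):
    """Return the IP of the first entry listing `name` as a hostname or alias."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

# The problematic layout: hadoop1 appears on the loopback line first.
broken = """127.0.0.1 hadoop1 localhost.localdomain localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver"""

# The suggested layout: loopback maps to localhost only.
fixed = """127.0.0.1 localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver"""

print(resolve(broken, "hadoop1"))  # 127.0.0.1 (master binds/advertises loopback)
print(resolve(fixed, "hadoop1"))   # 10.64.155.52 (external interface)
```

With the first layout, every daemon that looks up hadoop1 gets 127.0.0.1, which matches the `hadoop1/127.0.0.1:9000` addresses in the logs above.]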
> 
> Alan
> 
> 
> 
> 
> 
> -----Original Message-----
> From: Michael Segel [mailto:michael_segel@hotmail.com]
> Sent: Wednesday, November 21, 2012 7:40 PM
> To: user@hbase.apache.org
> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
> 
> 
> 
> Hi,
> 
> Quick question...
> 
> Do you have 127.0.0.1 set to anything other than localhost?
> 
> If not, then it should be fine and you may want to revert to hard-coded IP addresses in your other configuration files.
> 
> If you have Hadoop up and working, then you should be able to stand up HBase on top of that.
> 
> Just doing a quick look, it seems that the name of your hadoop machine is resolving to your localhost.
> 
> What does your /etc/hosts file look like?
> 
> How many machines in your cluster?
> 
> Have you thought about pulling down a 'free' copy of Cloudera, MapR or, if Hortonworks has one, theirs ...
> 
> If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.
> 
> HTH
> 
> -Mike
> 
> 
> 
> On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:
> 
> 
> 
>> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.
> 
>> 
> 
>> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.
> 
>> 
> 
>> It seems I have a choice: access my servers remotely or run HBase and these 2 are mutually incompatible.  I think my options are either:
> 
>> a) revert to an old version of HBase
> 
>> b) switch to Accumulo, or
> 
>> c) switch to Cassandra.
> 
>> 
> 
>> Alan
> 
>> 
> 
>> 
> 
>> -----Original Message-----
> 
>> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> 
>> Sent: Wednesday, November 21, 2012 3:11 PM
> 
>> To: user@hbase.apache.org
> 
>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
> 
>> 
> 
>> Hello Alan,
> 
>> 
> 
>>   It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you
> 
>> have proper DNS resolution as it plays an important role in proper Hbase
> 
>> functioning. Also add the "hbase.zookeeper.property.clientPort" property in
> 
>> your hbase-site.xml file and see if it works for you.
> 
>> 
> 
>> Regards,
> 
>>   Mohammad Tariq
> 
>> 
> 
>> 
> 
>> 
> 
>>> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com> wrote:
> 
>> 
> 
>>> I'd appreciate any suggestions as to how to get HBase up and running. Right now it dies after a few seconds on all servers. I am using Hadoop 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>>> 
>>> History: Yesterday I managed to get HBase 0.94.2 working but only after removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my clocks). All was fine until this morning when I realized I could not initiate remote log-ins to my servers (using VNC or NX) until I restored the 127.0.0.1 line in /etc/hosts. With that restored I am back to a non-working HBase.
>>> 
>>> With HBase managing ZK I see the following in the HBase Master and ZK logs, respectively:
>>> 2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>> 
>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 
>>> At roughly the same time (clocks not perfectly synchronized) I see this in a Regionserver log:
>>> 2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> ...
>>> 2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 
>>> Logs and configuration follow.
>>> 
>>> Then I tried managing ZK myself and HBase then fails for seemingly different reasons.
>>> 2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
>>> 
>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>> 
>>> Both HMaster error logs (self-managed and me-managed ZK) mention the 127.0.0.1 IP address instead of referring to the server by its name (hadoop1) or its true IP address or simply as localhost.
>>> 
>>> So, start-hbase.sh works OK (HB managing ZK):
>>> ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
>>> hadoop1: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
>>> hadoop2: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
>>> hadoop3: starting zookeeper, logging to /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
>>> starting master, logging to /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
>>> hadoop2: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
>>> hadoop6: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
>>> hadoop3: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
>>> hadoop5: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
>>> hadoop4: starting regionserver, logging to /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>>> 
>>> I have in hbase-site.xml:
>>> <property>
>>>   <name>hbase.cluster.distributed</name>
>>>   <value>true</value>
>>> </property>
>>> <property>
>>>   <name>hbase.master</name>
>>>   <value>hadoop1:60000</value>
>>> </property>
>>> <property>
>>>   <name>hbase.rootdir</name>
>>>   <value>hdfs://hadoop1:9000/hbase</value>
>>> </property>
>>> <property>
>>>   <name>hbase.zookeeper.property.dataDir</name>
>>>   <value>/tmp/zookeeper_data</value>
>>> </property>
>>> <property>
>>>   <name>hbase.zookeeper.quorum</name>
>>>   <value>hadoop1,hadoop2,hadoop3</value>
>>> </property>
>>> 
>>> I have in hbase-env.sh:
>>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
>>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
>>> export HBASE_HEAPSIZE=2000
>>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
>>> export HBASE_MANAGES_ZK=true
>>> 
>>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
>>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 386178
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 386178
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
>>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbas
e-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/ha
doop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>>> 2012-11-21 13:40:22,099 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 742@hadoop1
>>> 2012-11-21 13:40:22,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:22,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>>> 2012-11-21 13:40:22,411 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>> 2012-11-21 13:40:22,747 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>>> 2012-11-21 13:40:22,967 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>> 2012-11-21 13:40:24,176 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:24,277 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>> 2012-11-21 13:40:24,767 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> 2012-11-21 13:40:25,757 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:40:26,597 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:26,597 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:40:27,775 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:27,775 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:40:28,318 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:28,318 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:28,419 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 8000ms before retry #3...
> 
>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:40:29,106 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:29,106 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:40:30,039 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:30,039 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:40:31,283 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:31,283 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:40:32,143 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:32,143 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:40:32,480 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:32,480 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:40:33,295 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:33,295 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:40:34,962 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:34,962 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:40:35,661 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:35,661 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:40:36,523 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:36,523 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:36,625 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 2012-11-21 13:40:36,625 ERROR
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> 
>>> failed after 3 retries
> 
>>> 2012-11-21 13:40:36,626 ERROR
> 
>>> org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
> 
>>> java.lang.RuntimeException: Failed construction of Master: class
> 
>>> org.apache.hadoop.hbase.master.HMaster
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
> 
>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
> 
>>>     at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
> 
>>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
> 
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
> 
>>>     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
> 
>>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> 
>>> Method)
> 
>>>     at
> 
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 
>>>     at
> 
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 
>>>     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
> 
>>>     ... 5 more
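[Editor's note, not part of the quoted logs: the `hadoop1/127.0.0.1:2181` lines above show the client resolving `hadoop1` to loopback, which is exactly what a `127.0.0.1 ... hadoop1` (or Ubuntu's `127.0.1.1 hadoop1`) entry in /etc/hosts causes: glibc-style lookup returns the first /etc/hosts entry whose name list contains the hostname. A minimal, self-contained sketch of that first-match behavior; `first_match` is an illustrative helper, not HBase or libc code, and the IP is the one appearing in the logs:]

```python
def first_match(hosts_text, name):
    """Return the IP of the first /etc/hosts line whose names include `name`."""
    for line in hosts_text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip  # first match wins, as with files-based NSS lookup
    return None

# Hosts file that breaks HBase: hadoop1 resolves to loopback first.
broken = """\
127.0.0.1 localhost hadoop1
10.64.155.52 hadoop1
"""

# Keep the loopback line (so local logins still work) but map the
# hostname only to its routable address.
fixed = """\
127.0.0.1 localhost
10.64.155.52 hadoop1
"""

print(first_match(broken, 'hadoop1'))  # 127.0.0.1
print(first_match(fixed, 'hadoop1'))   # 10.64.155.52
```

The practical fix this sketch suggests is to keep `127.0.0.1 localhost` for local services (VNC/NX) while removing the cluster hostname from any loopback line, so ZooKeeper peers advertise and bind their routable addresses.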
>>>
>>> From server hadoop2 (running regionserver, ZK, DN, TT)
>>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 193105
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 193105
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
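[Editor's note, not part of the quoted logs: `open files (-n) 1024` in the ulimit dump above is low for HBase; the HBase documentation recommends raising the file-descriptor limit well above the default (commonly to 10240 or more). A sketch of the corresponding /etc/security/limits.conf entries for the `ngc` user running the daemons; the exact values are illustrative, not prescriptive:]

```
# /etc/security/limits.conf -- raise fd and process limits for the HBase user
ngc  -  nofile  32768
ngc  -  nproc   32000
```

After adding the entries (and ensuring pam_limits is enabled), a fresh login for that user should show the new limit in `ulimit -n`.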
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/tmp/hbase-ngc/logs, -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log, -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,DRFAS]
>>> 2012-11-21 13:40:57,222 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside HConnection retries=100
>>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
>>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache with maximum size 493.8m
>>> 2012-11-21 13:40:57,699 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop2.aj.c2fse.northgrum.com
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.0.0-12-generic
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=regionserver:60020
>>> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>> 2012-11-21 13:40:57,719 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12835@hadoop2
>>> 2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:57,727 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>>> 2012-11-21 13:40:58,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>>> 2012-11-21 13:40:58,726 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>>> 2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:41:00,660 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:00,660 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:00,761 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 4000ms before retry #2...
> 
>>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:41:01,422 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:01,422 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:41:02,370 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:02,370 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:41:02,627 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:02,627 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:41:03,968 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:03,969 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:41:04,733 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:04,733 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:04,835 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 8000ms before retry #3...
> 
>>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:41:05,741 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:05,741 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:41:06,192 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:06,192 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:41:07,313 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:07,313 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:41:08,273 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:08,273 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:41:09,090 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:09,090 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:41:09,711 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:09,711 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:41:11,121 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:11,121 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:41:11,600 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:11,600 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:41:12,320 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:12,320 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:41:12,861 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:12,861 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:12,962 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>> 2012-11-21 13:41:12,962 ERROR
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> 
>>> failed after 3 retries
> 
>>> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
> 
>>> regionserver:60020 Unable to set watcher on znode /hbase/master
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
> 
>>>     at java.lang.Thread.run(Thread.java:662)
> 
>>> 2012-11-21 13:41:12,966 ERROR
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020
> 
>>> Received unexpected KeeperException, re-throwing exception
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
> 
>>>     at java.lang.Thread.run(Thread.java:662)
> 
>>> 2012-11-21 13:41:12,966 FATAL
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> 
>>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception
> 
>>> during initialization, aborting
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> 
>>>     at
> 
>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
> 
>>>     at java.lang.Thread.run(Thread.java:662)
> 
>>> 2012-11-21 13:41:12,969 FATAL
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> 
>>> loaded coprocessors are: []
> 
>>> 2012-11-21 13:41:12,969 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
> 
>>> exception during initialization, aborting
> 
>>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:41:14,834 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:14,834 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:41:15,335 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:41:15,335 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> 
>>> server on 60020
> 
>>> 2012-11-21 13:41:15,975 FATAL
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> 
>>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS
> 
>>> failed.  Hence aborting RS.
> 
>>> java.io.IOException: Received the shutdown message while waiting.
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
> 
>>>     at
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
> 
>>>     at java.lang.Thread.run(Thread.java:662)
> 
>>> 2012-11-21 13:41:15,976 FATAL
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> 
>>> loaded coprocessors are: []
> 
>>> 2012-11-21 13:41:15,976 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
> 
>>> of RS failed.  Hence aborting RS.
> 
>>> 2012-11-21 13:41:15,978 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
> 
>>> MXBean
> 
>>> 2012-11-21 13:41:15,980 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting;
> 
>>> hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
> 
>>> 2012-11-21 13:41:15,980 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
> 
>>> 2012-11-21 13:41:15,981 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown
> 
>>> hook thread.
> 
>>> 2012-11-21 13:41:15,981 INFO
> 
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
> 
>>> 
> 
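The RetryCounter lines in the regionserver log above (2000ms before retry #1, 4000ms before #2, 8000ms before #3, then "ZooKeeper exists failed after 3 retries") show a simple doubling backoff, so the whole exists() attempt gives up after roughly 14 seconds. A sketch of that schedule, assuming nothing beyond the doubling visible in the log:

```shell
# Sketch of the retry schedule implied by the RetryCounter log lines:
# each retry doubles the previous sleep, starting at 2000 ms.
sleep_ms=2000
for retry in 1 2 3; do
  echo "Sleeping ${sleep_ms}ms before retry #${retry}"
  sleep_ms=$((sleep_ms * 2))
done
```

That timing matters mainly because it bounds how long a regionserver will wait for a reachable quorum before aborting, as happens at 13:41:12,966 above.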
>>> Finally, in the zookeeper log from hadoop1 I have:
> 
>>> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
> 
>>> core file size          (blocks, -c) 0
> 
>>> data seg size           (kbytes, -d) unlimited
> 
>>> scheduling priority             (-e) 0
> 
>>> file size               (blocks, -f) unlimited
> 
>>> pending signals                 (-i) 386178
> 
>>> max locked memory       (kbytes, -l) 64
> 
>>> max memory size         (kbytes, -m) unlimited
> 
>>> open files                      (-n) 1024
> 
>>> pipe size            (512 bytes, -p) 8
> 
>>> POSIX message queues     (bytes, -q) 819200
> 
>>> real-time priority              (-r) 0
> 
>>> stack size              (kbytes, -s) 8192
> 
>>> cpu time               (seconds, -t) unlimited
> 
>>> max user processes              (-u) 386178
> 
>>> virtual memory          (kbytes, -v) unlimited
> 
>>> file locks                      (-x) unlimited
> 
>>> 2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
>>> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
>>> 2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
>>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
>>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
>>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase
>>> 2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>>> 2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
>>> 2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
>>> 2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
>>> 2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
>>> 2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
>>> 2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
>>> 2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
>>> 2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 400
>>> 2012-11-21 13:40:21,064 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> 2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 800
>>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 1600
>>> 2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55216 (no session established for client)
>>> 2012-11-21 13:40:22,373 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60339
>>> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60339 (no session established for client)
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60342
>>> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60342 (no session established for client)
>>> 2012-11-21 13:40:23,187 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55221
>>> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55221 (no session established for client)
>>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 3200
>>> 2012-11-21 13:40:24,116 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.54:35599
>>> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.54:35599 (no session established for client)
>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55225
>>> ...
> 
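An aside on the quoted ZooKeeper log above: connections are being refused both from 127.0.0.1 and from 10.64.155.x clients, which is consistent with hadoop1 resolving differently for different processes. A small self-contained sketch that pulls the client addresses out of lines like the ones quoted (the three log lines are copied verbatim from this thread):

```python
import re

# Three "Accepted socket connection" lines copied from the ZooKeeper log above.
LOG = """\
2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
2012-11-21 13:40:22,373 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60339
2012-11-21 13:40:24,116 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.54:35599
"""

# Distinct client IPs that reached this ZooKeeper server while it was not yet up.
ips = sorted({m.group(1) for m in re.finditer(r"from /([\d.]+):\d+", LOG)})
print(ips)  # ['10.64.155.52', '10.64.155.54', '127.0.0.1']
```

Seeing both loopback and external addresses in the same server's log is a quick hint that name resolution is split between /etc/hosts entries.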
>>>
>>> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
>>> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
>>> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 386178
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 386178
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
>>> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbas
e-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/ha
doop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>>> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>> 2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
>>> 2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000
>>> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>>> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
>>> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
>>> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting
>>> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>>> 2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
>>> 2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
>>> 2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915
>>> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>     at $Proxy10.getProtocolVersion(Unknown Source)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>>>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>>>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>>>   ...
>>>
>>> [Message clipped]
>>
>


Re: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by Michael Segel <mi...@hotmail.com>.
Hi Alan, 

Yes. I am suggesting that. 

Your 127.0.0.1 line should map to localhost only, with your other entries on their own lines. 
It looks like 10.64.155.52 is the external interface (eth0) for the machine hadoop1.

Adding hadoop1 to the 127.0.0.1 line confuses HBase, since (going from memory) it uses the first entry it sees, so the name will always resolve to the loopback address.
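The "first entry wins" behavior Mike describes can be illustrated with a small self-contained sketch. The two hosts files below are the variants Alan posted in this thread; the lookup function is only an approximation of a files-based resolver, not the real glibc implementation:

```python
# Approximate a files-based hostname lookup: scan the hosts file top to
# bottom and return the address on the first line that lists the name.
def first_match(hosts_text, hostname):
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) > 1 and hostname in fields[1:]:
            return fields[0]
    return None

BROKEN = """\
127.0.0.1 hadoop1 localhost.localdomain localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
"""

FIXED = """\
127.0.0.1 localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
"""

print(first_match(BROKEN, "hadoop1"))  # 127.0.0.1 -- the master advertises loopback
print(first_match(FIXED, "hadoop1"))   # 10.64.155.52 -- reachable by the other nodes
```

With the broken file, every service on hadoop1 that looks itself up gets 127.0.0.1, which matches the "hadoop1/127.0.0.1:9000" addresses seen in the HMaster log.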

I think that should fix your problem. 

HTH

-Mike

On Nov 23, 2012, at 10:11 AM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:

> Mike,
>
> Yes I do.
>
> With this /etc/hosts HBase works but NX and VNC do not:
> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost
> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
> ...
>
> With this /etc/hosts NX and VNC work but HBase does not:
> 127.0.0.1 hadoop1 localhost.localdomain localhost
> 10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
> 10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
> ...
>
> I assume from your question that I should try replacing
> 127.0.0.1 hadoop1 localhost.localdomain localhost
> with simply:
> 127.0.0.1 localhost
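If that replacement is made, the resulting /etc/hosts would look like this (a sketch assembled from the entries quoted in this thread; the remaining region servers follow the same pattern):

```
127.0.0.1 localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
```

The loopback line stays present for NX/VNC, while hadoop1 now resolves only to 10.64.155.52. This mirrors Mike's advice; it was not verified on the original cluster.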
>
> Alan
>
> 
> -----Original Message-----
> From: Michael Segel [mailto:michael_segel@hotmail.com]
> Sent: Wednesday, November 21, 2012 7:40 PM
> To: user@hbase.apache.org
> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>
> Hi,
>
> Quick question...
>
> Do you have 127.0.0.1 set to anything other than localhost?
>
> If not, then it should be fine and you may want to revert to hard-coded IP addresses in your other configuration files.
>
> If you have Hadoop up and working, then you should be able to stand up HBase on top of that.
>
> Just doing a quick look, it seems that your hadoop hostname is resolving to your localhost.
> What does your /etc/hosts file look like?
>
> How many machines in your cluster?
>
> Have you thought about pulling down a 'free' copy of Cloudera, MapR, or Hortonworks if they have one ...
>
> If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.
>
> HTH
>
> -Mike
>
> On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:
>
>> Thanks Mohammad.  I set the clientPort but, as I was already using the default value of 2181, it made no difference.
>>
>> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC, and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x), so it seems to be something relatively new.
>>
>> It seems I have a choice: access my servers remotely or run HBase, and these two are mutually incompatible.  I think my options are either:
>> a) revert to an old version of HBase,
>> b) switch to Accumulo, or
>> c) switch to Cassandra.
>>
>> Alan
>>
> 
>> -----Original Message-----
>> From: Mohammad Tariq [mailto:dontariq@gmail.com]
>> Sent: Wednesday, November 21, 2012 3:11 PM
>> To: user@hbase.apache.org
>> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
>>
>> Hello Alan,
>>
>>   It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you have proper DNS resolution, as it plays an important role in proper HBase functioning. Also add the "hbase.zookeeper.property.clientPort" property in your hbase-site.xml file and see if it works for you.
> 
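For reference, Mohammad's clientPort suggestion corresponds to a fragment like this in hbase-site.xml (the property name is standard; 2181 shown here is ZooKeeper's default port, which is why setting it explicitly changed nothing in Alan's case):

```
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```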
>> 
> 
>> Regards,
> 
>>   Mohammad Tariq
> 
>> 
> 
>> 
> 
>> 
> 
>> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com>>wrote:
> 
>> 
> 
>>> I'd appreciate any suggestions as to how to get HBase up and running.
> 
>>> Right now it dies after a few seconds on all servers.  I am using Hadoop
> 
>>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
> 
>>> 
> 
>>> History: Yesterday I managed to get HBase 0.94.2 working but only after
> 
>>> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
> 
>>> clocks).  All was fine until this morning when I realized I could not
> 
>>> initiate remote log-ins to my servers (using VNC or NX) until I restored
> 
>>> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
> 
>>> non-working HBase.
> 
>>> 
> 
>>> With HBase managing ZK I see the following in the HBase Master and ZK
> 
>>> logs, respectively:
> 
>>> 2012-11-21 13:40:22,236 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 
> 
>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
> 
>>> Exception causing close of session 0x0 due to java.io.IOException:
> 
>>> ZooKeeperServer not running
> 
>>> 
> 
>>> At roughly the same time (clocks not perfectly synchronized) I see this in
> 
>>> a Regionserver log:
> 
>>> 2012-11-21 13:40:57,727 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> ...
> 
>>> 2012-11-21 13:40:57,848 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase/master
> 
>>> 
> 
>>> Logs and configuration follow.
> 
>>> 
> 
>>> Then I tried managing ZK myself and HBase then fails for seemingly
> 
>>> different reasons.
> 
>>> 2012-11-21 14:46:37,320 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
> 
>>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
> 
>>> is not a retry
> 
>>> 
> 
>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
> 
>>> Unhandled exception. Starting shutdown.
> 
>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
> 
>>> connection exception: java.net.ConnectException: Connection refused
> 
>>> 
> 
>>> Both HMaster error logs (self-managed and me-managed ZK) mention the
> 
>>> 127.0.0.1 IP address instead of referring to the server by its name
> 
>>> (hadoop1) or its true IP address or simply as localhost.
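[Editor's note: a quick way to spot the condition behind those `hadoop1/127.0.0.1` log entries is to scan /etc/hosts for non-localhost names bound to a 127.x address. `check_loopback_hosts` below is a hypothetical helper, not from the thread:]

```shell
# Sketch: flag /etc/hosts entries that map a non-localhost name to a
# loopback address -- the situation that makes HBase advertise 127.0.0.1.
check_loopback_hosts() {
  awk '$1 ~ /^127\./ {
    for (i = 2; i <= NF; i++)
      if ($i != "localhost" && $i !~ /^localhost\./)
        print "loopback mapping for host: " $i
  }' "$@"
}

# Example against an Ubuntu-style default hosts file:
printf '127.0.0.1 localhost\n127.0.1.1 hadoop1\n10.64.155.52 hadoop1\n' \
  | check_loopback_hosts
# prints: loopback mapping for host: hadoop1
```

Run it as `check_loopback_hosts /etc/hosts` on each node; any output is a candidate line to remove or correct.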
> 
>>> 
> 
>>> So, start-hbase.sh works OK (HB managing ZK):
> 
>>> ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
> 
>>> hadoop1: starting zookeeper, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
> 
>>> hadoop2: starting zookeeper, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
> 
>>> hadoop3: starting zookeeper, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
> 
>>> starting master, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
> 
>>> hadoop2: starting regionserver, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
> 
>>> hadoop6: starting regionserver, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
> 
>>> hadoop3: starting regionserver, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
> 
>>> hadoop5: starting regionserver, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
> 
>>> hadoop4: starting regionserver, logging to
> 
>>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
> 
>>> 
> 
>>> I have in hbase-site.xml:
> 
>>> <property>
> 
>>>   <name>hbase.cluster.distributed</name>
> 
>>>   <value>true</value>
> 
>>> </property>
> 
>>>     <property>
> 
>>>           <name>hbase.master</name>
> 
>>>           <value>hadoop1:60000</value>
> 
>>>       </property>
> 
>>> <property>
> 
>>>   <name>hbase.rootdir</name>
> 
>>>   <value>hdfs://hadoop1:9000/hbase</value>
> 
>>> </property>
> 
>>> <property>
> 
>>>   <name>hbase.zookeeper.property.dataDir</name>
> 
>>>   <value>/tmp/zookeeper_data</value>
> 
>>> </property>
> 
>>> <property>
> 
>>>   <name>hbase.zookeeper.quorum</name>
> 
>>>   <value>hadoop1,hadoop2,hadoop3</value>
> 
>>> </property>
> 
>>> 
> 
>>> I have in hbase-env.sh:
> 
>>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
> 
>>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
> 
>>> export HBASE_HEAPSIZE=2000
> 
>>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
> 
>>> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
> 
>>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
> 
>>> export HBASE_MANAGES_ZK=true
> 
>>> 
> 
>>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
> 
>>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
> 
>>> core file size          (blocks, -c) 0
> 
>>> data seg size           (kbytes, -d) unlimited
> 
>>> scheduling priority             (-e) 0
> 
>>> file size               (blocks, -f) unlimited
> 
>>> pending signals                 (-i) 386178
> 
>>> max locked memory       (kbytes, -l) 64
> 
>>> max memory size         (kbytes, -m) unlimited
> 
>>> open files                      (-n) 1024
> 
>>> pipe size            (512 bytes, -p) 8
> 
>>> POSIX message queues     (bytes, -q) 819200
> 
>>> real-time priority              (-r) 0
> 
>>> stack size              (kbytes, -s) 8192
> 
>>> cpu time               (seconds, -t) unlimited
> 
>>> max user processes              (-u) 386178
> 
>>> virtual memory          (kbytes, -v) unlimited
> 
>>> file locks                      (-x) unlimited
> 
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> 
>>> HBase 0.94.2
> 
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> 
>>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 
>>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> 
>>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 
>>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
> 
>>> serverside HConnection retries=100
> 
>>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> 
>>> Thread-2
> 
>>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> 
>>> Initializing RPC Metrics with hostName=HMaster, port=60000
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:host.name=hadoop1
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.version=1.6.0_25
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.vendor=Sun Microsystems Inc.
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbas
e-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/ha
doop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.io.tmpdir=/tmp
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:java.compiler=<NA>
> 
>>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:os.name=Linux
> 
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:os.arch=amd64
> 
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:os.version=3.2.0-24-generic
> 
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:user.name=ngc
> 
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:user.home=/home/ngc
> 
>>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> 
>>> environment:user.dir=/home/ngc/hbase-0.94.2
> 
>>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating
> 
>>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> 
>>> sessionTimeout=180000 watcher=master:60000
> 
>>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server /127.0.0.1:2181
> 
>>> 2012-11-21 13:40:22,099 INFO
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> 
>>> this process is 742@hadoop1
> 
>>> 2012-11-21 13:40:22,106 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:22,106 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:22,236 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 2000ms before retry #1...
> 
>>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server /10.64.155.53:2181
> 
>>> 2012-11-21 13:40:22,411 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:22,411 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server /10.64.155.54:2181
> 
>>> 2012-11-21 13:40:22,747 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:22,747 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server /10.64.155.52:2181
> 
>>> 2012-11-21 13:40:22,967 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:22,967 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:40:24,176 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:24,176 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:24,277 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 4000ms before retry #2...
> 
>>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:40:24,767 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:24,767 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:40:25,757 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:25,757 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:40:26,597 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:26,597 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server hadoop1/127.0.0.1:2181
> 
>>> 2012-11-21 13:40:27,775 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:27,775 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1/127.0.0.1:2181, initiating session
> 
>>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 
>>> 2012-11-21 13:40:28,318 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:28,318 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:28,419 WARN
> 
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> 
>>> ZooKeeper exception:
> 
>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
> 
>>> KeeperErrorCode = ConnectionLoss for /hbase
> 
>>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter:
> 
>>> Sleeping 8000ms before retry #3...
> 
>>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 
>>> 2012-11-21 13:40:29,106 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:29,106 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening
> 
>>> socket connection to server
> 
>>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 
>>> 2012-11-21 13:40:30,039 WARN
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> 
>>> java.lang.SecurityException: Unable to locate a login configuration
> 
>>> occurred when trying to find JAAS configuration.
> 
>>> 2012-11-21 13:40:30,039 INFO
> 
>>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> 
>>> SASL-authenticate because the default JAAS configuration section 'Client'
> 
>>> could not be found. If you are not using SASL, you may ignore this. On the
> 
>>> other hand, if you expected SASL to work, please fix your JAAS
> 
>>> configuration.
> 
>>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket
> 
>>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> 
>>> initiating session
> 
>>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to
> 
>>> read additional data from server sessionid 0x0, likely server has closed
> 
>>> socket, closing socket connection and attempting reconnect
> 
>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening
>>> socket connection to server hadoop1/127.0.0.1:2181
>>> 2012-11-21 13:40:31,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>> 2012-11-21 13:40:32,143 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> 2012-11-21 13:40:32,480 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>> 2012-11-21 13:40:33,295 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:33,295 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>> 2012-11-21 13:40:34,962 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>> 2012-11-21 13:40:35,661 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> 2012-11-21 13:40:36,523 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:36,625 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>> 2012-11-21 13:40:36,625 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>>> 2012-11-21 13:40:36,626 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
>>> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
>>>     at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>>>     at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>>>     at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>>     at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>>>     at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
>>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>>>     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>>>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>>>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>>>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>>>     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>     at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>>>     ... 5 more
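[Editor's note: the stack trace above fails exactly where the subject line points — the master's ZooKeeper client keeps resolving hadoop1 to 127.0.0.1 and so never reaches the quorum member listening on that box's real interface. Ubuntu's installer maps the machine's own hostname to a loopback address in /etc/hosts by default. A common layout that keeps loopback (and hence VNC/NX log-ins) working while letting cluster names resolve to LAN addresses is sketched below; the 10.64.155.x addresses are taken from these logs, and the exact file is an assumption, not from the original post:]

```
# /etc/hosts sketch -- keep localhost on loopback, but do NOT map
# the cluster hostname (hadoop1) to 127.0.0.1 or 127.0.1.1
127.0.0.1      localhost
10.64.155.52   hadoop1.aj.c2fse.northgrum.com   hadoop1
10.64.155.53   hadoop2.aj.c2fse.northgrum.com   hadoop2
10.64.155.54   hadoop3.aj.c2fse.northgrum.com   hadoop3
```

After editing, `ping hadoop1` from another node should answer from 10.64.155.52, not 127.0.0.1.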
>>> 
>>> From server hadoop2 (running regionserver, ZK, DN, TT)
>>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 193105
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 193105
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
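[Editor's note: one incidental item in the ulimit dump above — `open files (-n) 1024` is the stock Linux default, and the HBase reference guide warns that a regionserver needs far more file descriptors than that (it suggests on the order of 10k or more). This is unrelated to the ZooKeeper failure in this thread, but worth raising while editing system files. A sketch for /etc/security/limits.conf, assuming HBase runs as user `ngc` as the log paths suggest:]

```
# /etc/security/limits.conf sketch -- user name 'ngc' inferred from log paths
ngc  -  nofile  32768
ngc  -  nproc   32000
```

On Ubuntu this also requires `session required pam_limits.so` in /etc/pam.d/common-session and a fresh log-in before the new limits take effect.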
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/tmp/hbase-ngc/logs, -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log, -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,DRFAS]
>>> 2012-11-21 13:40:57,222 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside HConnection retries=100
>>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
>>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache with maximum size 493.8m
>>> 2012-11-21 13:40:57,699 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop2.aj.c2fse.northgrum.com
>>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.0.0-12-generic
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=regionserver:60020
>>> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>> 2012-11-21 13:40:57,719 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12835@hadoop2
>>> 2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:57,727 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>>> 2012-11-21 13:40:58,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>>> 2012-11-21 13:40:58,726 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>>> 2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> 2012-11-21 13:41:00,660 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:00,761 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>> 2012-11-21 13:41:01,422 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>> 2012-11-21 13:41:02,370 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>>> 2012-11-21 13:41:02,627 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> 2012-11-21 13:41:03,968 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>>> 2012-11-21 13:41:04,733 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> 2012-11-21 13:41:04,835 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...
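[Editor's note: the backoff above (2000 ms, 4000 ms, 8000 ms) never succeeds because the client keeps cycling back to hadoop1/127.0.0.1:2181. A quick way to confirm what the resolver is doing on each box is to check whether a quorum hostname maps to a loopback address; the `resolves_to_loopback` helper below is an illustrative sketch, not part of HBase or ZooKeeper:]

```python
import socket

def resolves_to_loopback(hostname: str) -> bool:
    """Return True if the OS resolver maps `hostname` to a loopback address.

    If a ZooKeeper quorum hostname such as 'hadoop1' resolves to 127.0.0.1
    (e.g. via an /etc/hosts entry), clients connect to their own loopback
    interface instead of the quorum member's real address and see connection
    resets like the ones in the log above.
    """
    return socket.gethostbyname(hostname).startswith("127.")

# 'localhost' is expected to map to loopback; run the same check against
# each quorum hostname (hadoop1, hadoop2, hadoop3) on every node -- any
# of them returning True points at a bad /etc/hosts entry on that node.
print(resolves_to_loopback("localhost"))
```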
> 
>>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>>> 2012-11-21 13:41:05,741 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>>> ... [the identical open / JAAS warning / connect / "server has closed socket" cycle repeats at 13:41:06,192 against hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, at 13:41:07,313 against hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, at 13:41:08,272 against hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, then again at 13:41:09,090 (hadoop1/127.0.0.1), 13:41:09,710 (hadoop1/10.64.155.52), 13:41:11,120 (hadoop3), 13:41:11,599 (hadoop2), 13:41:12,320 (hadoop1/127.0.0.1), and 13:41:12,860 (hadoop1/10.64.155.52)]
> 
>>> 2012-11-21 13:41:12,962 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>> 2012-11-21 13:41:12,962 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>>> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020 Unable to set watcher on znode /hbase/master
>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>>     at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>>     at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>>     at java.lang.Thread.run(Thread.java:662)
>>> 2012-11-21 13:41:12,966 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020 Received unexpected KeeperException, re-throwing exception
>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master [same stack trace as above]
>>> 2012-11-21 13:41:12,966 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception during initialization, aborting
>>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master [same stack trace as above]
>>> 2012-11-21 13:41:12,969 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>>> 2012-11-21 13:41:12,969 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected exception during initialization, aborting
> 
>>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>>> ... [the JAAS warning / connect / "server has closed socket" cycle fails here as before, and once more against hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181 at 13:41:15,335]
> 
>>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>>> 2012-11-21 13:41:15,975 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS failed.  Hence aborting RS.
>>> java.io.IOException: Received the shutdown message while waiting.
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>>     at java.lang.Thread.run(Thread.java:662)
>>> 2012-11-21 13:41:15,976 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>>> 2012-11-21 13:41:15,976 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
>>> 2012-11-21 13:41:15,978 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
>>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
>>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
>>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
> 
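The region server log above alternates between "hadoop1/127.0.0.1:2181" and "hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181", i.e. the name hadoop1 sometimes resolves to the loopback interface, where no quorum peer is reachable by the other nodes. A minimal sketch of what /etc/hosts on hadoop1 could look like instead — the IPs are taken from the logs above; the key point is that 127.0.0.1 stays mapped to localhost only, never to the machine's own hostname:

```text
# /etc/hosts on hadoop1 (sketch; addresses taken from the logs above)
127.0.0.1     localhost
# do NOT add "hadoop1" to the loopback line (or a 127.0.1.1 hadoop1 line)
10.64.155.52  hadoop1.aj.c2fse.northgrum.com  hadoop1
10.64.155.53  hadoop2.aj.c2fse.northgrum.com  hadoop2
10.64.155.54  hadoop3.aj.c2fse.northgrum.com  hadoop3
```

With this layout, tools that need localhost (VNC/NX log-ins) keep working, while hadoop1 resolves to the routable 10.64.155.52 that the ZooKeeper peers and region servers expect.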
>>> 
> 
>>> Finally, in the zookeeper log from hadoop1 I have:
>>> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 386178
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 386178
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
>>> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
>>> ... [similar preRegister DEBUG lines follow for VersionInfo, ZKConfig, HBaseConfiguration, and org.apache.hadoop.hbase]
>>> 2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>>> 2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
>>> 2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
>>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
>>> 2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
>>> 2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
>>> 2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
>>> 2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
>>> 2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
>>> 2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>>> 2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>>> ... [the "Have smaller server identifier, so dropping the connection: (1, 0) / (2, 0)" pair repeats while FastLeaderElection backs off -- Notification time out: 400, 800, 1600, 3200]
>>> 2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
>>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55216 (no session established for client)
>>> ... [the same accept / "ZooKeeperServer not running" / close sequence repeats for clients /10.64.155.52:60339, /10.64.155.52:60342, /127.0.0.1:55221, /10.64.155.54:35599, and /127.0.0.1:55225]
>>> ...
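Note how in the ZooKeeper server log above the quorum accepts connections from both /10.64.155.x and /127.0.0.1: the nodes' own hostnames are resolving to loopback. A quick way to test each node for that failure mode is a few lines of Python; this helper is only an illustration I wrote for this thread, not part of HBase or ZooKeeper:

```python
import ipaddress
import socket

def resolves_to_loopback(hostname):
    """Return True if any IPv4 address that `hostname` resolves to is a
    loopback address (127.0.0.0/8) -- the failure mode seen in these logs,
    where "hadoop1" maps to 127.0.0.1."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return any(ipaddress.ip_address(info[4][0]).is_loopback for info in infos)

if __name__ == "__main__":
    # Run on each cluster node; a healthy node should print False
    # for its own hostname (e.g. "hadoop1").
    name = socket.gethostname()
    print("%s resolves to loopback: %s" % (name, resolves_to_loopback(name)))
```

If this prints True for the node's own hostname, ZooKeeper peers will advertise or bind addresses the other machines cannot reach, which matches the ConnectionLoss churn above.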
> 
>>> 
> 
>>> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
>>> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
>>> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
> 
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 386178
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 386178
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>>> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
>>> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>>> ... [the "Starting Thread-2" line repeats ten times between 14:46:36,822 and 14:46:36,851]
>>> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 
>>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbas
e-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/ha
doop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>>> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>>> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>>> 2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
>>> 2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>>> 2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>>> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>>> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000
>>> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>>> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
>>> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
>>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
>>> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
>>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
>>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
>>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting
>>> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
>>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>>> 2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
>>> 2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
>>> 2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915
>>> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
>>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>     at $Proxy10.getProtocolVersion(Unknown Source)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>>>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>>>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>>>   ...
>>> [Message clipped]
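[Editor's note: the retry loop above shows the master probing the NameNode at hadoop1:9000, which /etc/hosts maps to 127.0.0.1 where nothing is listening. A small diagnostic along these lines (a generic sketch, not part of the original thread) confirms whether a service port is actually reachable at a given address:]

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: listen on an ephemeral loopback port, then probe it.
# On the cluster you would instead call e.g. port_open("hadoop1", 9000).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
_, demo_port = srv.getsockname()
print(port_open("127.0.0.1", demo_port))
srv.close()
```

Running this against both hadoop1:9000 and 10.64.155.52:9000 would show whether the NameNode is listening only on the routable interface while the name resolves to loopback.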


RE: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by "Ratner, Alan S (IS)" <Al...@ngc.com>.
Mike,



            Yes I do.



With this /etc/hosts HBase works but NX and VNC do not.

10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver localhost

10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1

...



With this /etc/hosts NX and VNC work but HBase does not.

127.0.0.1 hadoop1 localhost.localdomain localhost

10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver

10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1

...



I assume from your question that I should try replacing

127.0.0.1 hadoop1 localhost.localdomain localhost

with simply:

127.0.0.1 localhost
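
[Editor's note: the usual split is to map 127.0.0.1 only to localhost and the machine's real name only to its routable address. Using the names and addresses from this thread, that would look like the sketch below — worth verifying against your actual interfaces:]

```
127.0.0.1    localhost localhost.localdomain
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1 hbase-masterserver hbase-nameserver
10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2 hbase-regionserver1
```

This should let NX/VNC reach the loopback name while hadoop1 resolves to the routable address HBase needs.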







Alan





-----Original Message-----
From: Michael Segel [mailto:michael_segel@hotmail.com]
Sent: Wednesday, November 21, 2012 7:40 PM
To: user@hbase.apache.org
Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)



Hi,



Quick question...



DO you have 127.0.0.1 set to anything other than localhost?



If not, then it should be fine and you may want to revert to hard coded IP addresses on your other configuration files.



If you have Hadoop up and working, then you should be able to stand up HBase on top of that.



Just taking a quick look, it seems that your hadoop hostname is resolving to your localhost.

What does your /etc/hosts file look like?
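
[Editor's note: a quick way to answer Mike's question programmatically — does a given name resolve to loopback? — is a check like the following (a generic diagnostic sketch, not from the thread; substitute the cluster's hostname, e.g. hadoop1):]

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if hostname resolves to a loopback (127.x.x.x) address,
    False if it resolves elsewhere, None if it does not resolve at all."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    except socket.gaierror:
        return None
    addrs = {info[4][0] for info in infos}
    return any(a.startswith("127.") for a in addrs)

# "localhost" should be loopback; the master's hostname should NOT be,
# or HBase will advertise 127.0.0.1 to the rest of the cluster.
print(resolves_to_loopback("localhost"))
```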



How many machines in your cluster?



Have you thought about pulling down a 'free' copy of Cloudera, MapR or if Hortonworks has one ...



If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense.



HTH



-Mike



On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Al...@ngc.com>> wrote:



> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.

>

> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.

>

> It seems I have a choice: access my servers remotely or run HBase and these 2 are mutually incompatible.  I think my options are either:

> a) revert to an old version of HBase

> b) switch to Accumulo, or

> c) switch to Cassandra.

>

> Alan

>

>

> -----Original Message-----

> From: Mohammad Tariq [mailto:dontariq@gmail.com]

> Sent: Wednesday, November 21, 2012 3:11 PM

> To: user@hbase.apache.org<ma...@hbase.apache.org>

> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

>

> Hello Alan,

>

>    It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you

> have proper DNS resolution as it plays an important role in proper Hbase

> functioning. Also add the "hbase.zookeeper.property.clientPort" property in

> your hbase-site.xml file and see if it works for you.
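
[Editor's note: for completeness, the property Mohammad mentions would look like this in hbase-site.xml — 2181 is the default, as Alan notes later, so setting it explicitly only helps if the quorum runs on a non-default port:]

```
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```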

>

> Regards,

>    Mohammad Tariq

>

>

>

> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com>>wrote:

>

>> I'd appreciate any suggestions as to how to get HBase up and running.

>> Right now it dies after a few seconds on all servers.  I am using Hadoop

>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.

>>

>> History: Yesterday I managed to get HBase 0.94.2 working but only after

>> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my

>> clocks).  All was fine until this morning when I realized I could not

>> initiate remote log-ins to my servers (using VNC or NX) until I restored

>> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a

>> non-working HBase.

>>

>> With HBase managing ZK I see the following in the HBase Master and ZK

>> logs, respectively:

>> 2012-11-21 13:40:22,236 WARN

>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient

>> ZooKeeper exception:

>> org.apache.zookeeper.KeeperException$ConnectionLossException:

>> KeeperErrorCode = ConnectionLoss for /hbase

>>

>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:

>> Exception causing close of session 0x0 due to java.io.IOException:

>> ZooKeeperServer not running

>>

>> At roughly the same time (clocks not perfectly synchronized) I see this in

>> a Regionserver log:

>> 2012-11-21 13:40:57,727 WARN

>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:

>> java.lang.SecurityException: Unable to locate a login configuration

>> occurred when trying to find JAAS configuration.

>> ...

>> 2012-11-21 13:40:57,848 WARN

>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient

>> ZooKeeper exception:

>> org.apache.zookeeper.KeeperException$ConnectionLossException:

>> KeeperErrorCode = ConnectionLoss for /hbase/master

>>

>> Logs and configuration follows.

>>

>> Then I tried managing ZK myself and HBase then fails for seemingly

>> different reasons.

>> 2012-11-21 14:46:37,320 WARN

>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node

>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this

>> is not a retry

>>

>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:

>> Unhandled exception. Starting shutdown.

>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on

>> connection exception: java.net.ConnectException: Connection refused

>>

>> Both HMaster error logs (self-managed and me-managed ZK) mention the

>> 127.0.0.1 IP address instead of referring to the server by its name

>> (hadoop1) or its true IP address or simply as localhost.

>>

>> So, start-hbase.sh works OK (HB managing ZK):

>> ngc@hadoop1:~/hbase-0.94.2$<mailto:ngc@hadoop1:~/hbase-0.94.2$> bin/start-hbase.sh

>> hadoop1: starting zookeeper, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out

>> hadoop2: starting zookeeper, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out

>> hadoop3: starting zookeeper, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out

>> starting master, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out

>> hadoop2: starting regionserver, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out

>> hadoop6: starting regionserver, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out

>> hadoop3: starting regionserver, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out

>> hadoop5: starting regionserver, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out

>> hadoop4: starting regionserver, logging to

>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out

>>

>> I have in hbase-site.xml:

>>  <property>

>>    <name>hbase.cluster.distributed</name>

>>    <value>true</value>

>>  </property>

>>      <property>

>>            <name>hbase.master</name>

>>            <value>hadoop1:60000</value>

>>        </property>

>>  <property>

>>    <name>hbase.rootdir</name>

>>    <value>hdfs://hadoop1:9000/hbase</value>

>>  </property>

>>  <property>

>>    <name>hbase.zookeeper.property.dataDir</name>

>>    <value>/tmp/zookeeper_data</value>

>>  </property>

>>  <property>

>>    <name>hbase.zookeeper.quorum</name>

>>    <value>hadoop1,hadoop2,hadoop3</value>

>> </property>

>>

>> I have in hbase-env.sh:

>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/

>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4

>> export HBASE_HEAPSIZE=2000

>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError

>> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"

>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs

>> export HBASE_MANAGES_ZK=true

>>

>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)

>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1

>> core file size          (blocks, -c) 0

>> data seg size           (kbytes, -d) unlimited

>> scheduling priority             (-e) 0

>> file size               (blocks, -f) unlimited

>> pending signals                 (-i) 386178

>> max locked memory       (kbytes, -l) 64

>> max memory size         (kbytes, -m) unlimited

>> open files                      (-n) 1024

>> pipe size            (512 bytes, -p) 8

>> POSIX message queues     (bytes, -q) 819200

>> real-time priority              (-r) 0

>> stack size              (kbytes, -s) 8192

>> cpu time               (seconds, -t) unlimited

>> max user processes              (-u) 386178

>> virtual memory          (kbytes, -v) unlimited

>> file locks                      (-x) unlimited

>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:

>> HBase 0.94.2

>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:

>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367

>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:

>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012

>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set

>> serverside HConnection retries=100

>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting

>> Thread-2

>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:

>> Initializing RPC Metrics with hostName=HMaster, port=60000

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:host.name=hadoop1

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:java.version=1.6.0_25

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:java.vendor=Sun Microsystems Inc.

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:java.home=/home/ngc/jdk1.6.0_25/jre

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase
-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoo
p-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar

>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>> 2012-11-21 13:40:22,099 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 742@hadoop1

>> 2012-11-21 13:40:22,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,236 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...

>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>> 2012-11-21 13:40:22,411 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>> 2012-11-21 13:40:22,747 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>> 2012-11-21 13:40:22,967 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:24,176 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:24,277 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...

>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:24,767 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:25,757 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:26,597 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:27,775 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:28,318 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:28,419 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...

>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:29,106 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:30,039 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:31,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:32,143 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:32,480 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:33,295 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:33,295 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:34,962 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:35,661 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:36,523 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:40:36,625 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:36,625 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>> 2012-11-21 13:40:36,626 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
>> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
>>      at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>>      at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>>      at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>>      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>      at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>>      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>>      ... 5 more

>>

>>

>> From server hadoop2 (running regionserver, ZK, DN, TT)
>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 193105
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 193105
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited

>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>> 2012-11-21 13:40:57,172 INFO org.apache.hadoop.hbase.util.ServerCommandLine: vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/tmp/hbase-ngc/logs, -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log, -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,DRFAS]

>> 2012-11-21 13:40:57,222 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside HConnection retries=100
>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HRegionServer, port=60020
>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache with maximum size 493.8m

>> 2012-11-21 13:40:57,699 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop2.aj.c2fse.northgrum.com
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:

>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.0.0-12-generic
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=regionserver:60020
>> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>> 2012-11-21 13:40:57,719 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12835@hadoop2
>> 2012-11-21 13:40:57,727 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:57,727 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:57,848 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 2000ms before retry #1...
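The sleeps logged by RetryCounter in this log (2000 ms, then 4000 ms and 8000 ms on the later retries) follow simple exponential backoff, doubling on each attempt. A minimal sketch of that schedule, assuming a 2000 ms base interval that matches the logged values; the actual HBase retry settings are configurable and not shown in this thread:

```shell
# Reproduce the RetryCounter schedule: sleep = base * 2^(retry - 1).
# base_ms=2000 is an assumption matching the logged output.
base_ms=2000
backoff_ms() {
  echo $(( base_ms * (1 << ($1 - 1)) ))
}
for retry in 1 2 3; do
  echo "Sleeping $(backoff_ms "$retry")ms before retry #$retry..."
done
# prints: Sleeping 2000ms before retry #1... / 4000ms #2 / 8000ms #3
```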

>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.53:2181
>> 2012-11-21 13:40:58,283 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
>> 2012-11-21 13:40:58,726 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
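The "established to hadoop1/127.0.0.1:2181" lines are the telltale symptom: the ensemble member name hadoop1 resolves to the loopback address, so the client talks to its own machine rather than hadoop1's real interface. A minimal sketch of the lookup behaviour, using the addresses from the surrounding log lines; the "fixed" layout (keeping 127.0.0.1 for localhost only) is a common convention and an assumption here, not something confirmed in this thread:

```shell
# resolve() mimics an /etc/hosts lookup: print the address mapped to a name.
resolve() {
  local name=$1 hosts=$2
  awk -v n="$name" '{ for (i = 2; i <= NF; i++) if ($i == n) print $1 }' <<<"$hosts"
}

# Layout implied by the thread: the hostname shares the loopback line.
broken="127.0.0.1 localhost hadoop1"
# Assumed fix: loopback maps only localhost; the hostname maps to the NIC.
fixed=$'127.0.0.1 localhost\n10.64.155.52 hadoop1 hadoop1.aj.c2fse.northgrum.com'

resolve hadoop1 "$broken"   # prints 127.0.0.1 (ZK traffic stays on loopback)
resolve hadoop1 "$fixed"    # prints 10.64.155.52
```

On a live node, `getent hosts hadoop1` performs the real lookup the JVM sees, so it is a quick way to confirm which address each daemon will bind to or connect on.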

>> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.52:2181
>> 2012-11-21 13:40:59,368 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:00,660 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:00,761 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 4000ms before retry #2...
>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:01,422 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:02,370 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:02,627 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:03,968 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:04,733 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:04,835 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter: Sleeping 8000ms before retry #3...

>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:05,741 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:06,192 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:07,313 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:08,273 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:09,090 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:09,711 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:11,121 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:11,600 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:12,320 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:12,861 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181, initiating session
>> 2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect

>> 2012-11-21 13:41:12,962 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:12,962 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
>> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: regionserver:60020 Unable to set watcher on znode /hbase/master
>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,966 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020 Received unexpected KeeperException, re-throwing exception
>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,966 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception during initialization, aborting
>> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,969 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>> 2012-11-21 13:41:12,969 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected exception during initialization, aborting
>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:14,834 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:15,335 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181, initiating session
>> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>> 2012-11-21 13:41:15,975 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS failed.  Hence aborting RS.
>> java.io.IOException: Received the shutdown message while waiting.
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:15,976 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
>> 2012-11-21 13:41:15,976 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
>> 2012-11-21 13:41:15,978 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
>> 2012-11-21 13:41:15,980 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
>> 2012-11-21 13:41:15,981 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>>

>> Finally, in the zookeeper log from hadoop1 I have:
>> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 386178
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 386178
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited

>> 2012-11-21 13:40:20,279 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
>> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
>> 2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e, name=log4j:logger=org.apache.hadoop.hbase
>> 2012-11-21 13:40:20,336 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>> 2012-11-21 13:40:20,356 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
>> 2012-11-21 13:40:20,378 INFO org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to 180000
>> 2012-11-21 13:40:20,379 INFO org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
>> 2012-11-21 13:40:20,395 INFO org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
>> 2012-11-21 13:40:20,442 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port: 0.0.0.0/0.0.0.0:3888
>> 2012-11-21 13:40:20,456 INFO org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
>> 2012-11-21 13:40:20,458 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id =  0, proposed zxid=0x0
>> 2012-11-21 13:40:20,460 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
>> 2012-11-21 13:40:20,464 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:20,465 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:20,663 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 400
>> 2012-11-21 13:40:21,064 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:21,065 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 800
>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:21,866 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 1600
>> 2012-11-21 13:40:22,113 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55216
>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55216 (no session established for client)
>> 2012-11-21 13:40:22,373 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60339
>> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60339 (no session established for client)
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.52:60342
>> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.52:60342 (no session established for client)
>> 2012-11-21 13:40:23,187 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55221
>> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:55221 (no session established for client)
>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:23,467 INFO org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time out: 3200
>> 2012-11-21 13:40:24,116 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /10.64.155.54:35599
>> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
>> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /10.64.155.54:35599 (no session established for client)
>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:55225

>> ...

>>

>> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem in /etc/hosts):
>> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
>> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 386178
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 386178
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited

>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.2
>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=100
>> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-2
>> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hadoop1
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_25
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/home/ngc/jdk1.6.0_25/jre

>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client

>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase
-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoo
p-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar

>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.2.0-24-generic
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=ngc
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/ngc
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181 sessionTimeout=180000 watcher=master:60000
>> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /10.64.155.54:2181
>> 2012-11-21 14:46:37,087 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 12692@hadoop1
>> 2012-11-21 14:46:37,095 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
>> 2012-11-21 14:46:37,095 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
>> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, initiating session
>> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid = 0x33b247f4c380000, negotiated timeout = 40000

>> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
>> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
>> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: starting

>> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hadoop1,60000,1353527196915
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>> 2012-11-21 14:46:37,299 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master directory
>> 2012-11-21 14:46:37,320 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this is not a retry
>> 2012-11-21 14:46:37,321 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hadoop1,60000,1353527196915

>> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>      at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>      at $Proxy10.getProtocolVersion(Unknown Source)
>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>      at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>      at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>>      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>      at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>>      at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>>      at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>>    ...

>>

>> [Message clipped]

>



Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by Michael Segel <mi...@hotmail.com>.
Hi,

Quick question... 

Do you have 127.0.0.1 set to anything other than localhost? 

If not, then it should be fine, and you may want to revert to hard-coded IP addresses in your other configuration files. 

If you have Hadoop up and working, then you should be able to stand up HBase on top of that. 

Just from a quick look, it seems that your Hadoop hostname is resolving to your localhost. 
What does your /etc/hosts file look like? 

How many machines in your cluster? 

Have you thought about pulling down a 'free' copy of Cloudera, MapR, or Hortonworks if they offer one? 

If you're thinking about using HBase as a standalone instance and don't care about Map/Reduce, maybe going with something else would make sense. 

HTH

-Mike
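
Since the master log shows Hadoop RPC dialing hadoop1/127.0.0.1:9000, one quick diagnostic is to ask the resolver what each name maps to, the same way the JVM will. The sketch below is illustrative (the helper name is my own, not from the thread); on a healthy node the machine's own hostname should report no loopback mapping, while localhost should:

```python
import ipaddress
import socket

def loopback_addrs(hostname):
    """Return the sorted loopback addresses that `hostname` resolves to."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return []  # name does not resolve at all
    addrs = {info[4][0] for info in infos}
    return sorted(a for a in addrs if ipaddress.ip_address(a).is_loopback)

# On a healthy HBase node, the machine's own hostname (e.g. "hadoop1")
# should report no loopback mapping; "localhost" is expected to be loopback.
for name in ("localhost", socket.gethostname()):
    print(name, "->", loopback_addrs(name) or "no loopback mapping")
```

If the node's own hostname shows up with 127.0.0.1 here, HBase and Hadoop will advertise and dial the loopback address, which matches the ConnectException to hadoop1/127.0.0.1:9000 quoted later in this thread.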

On Nov 21, 2012, at 3:02 PM, "Ratner, Alan S (IS)" <Al...@ngc.com> wrote:

> Thanks Mohammad.  I set the clientPort but as I was already using the default value of 2181 it made no difference.
> 
> I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC and both apparently rely on the 127.0.0.1 IP address.  This was not a problem with older versions of HBase (I used to use 0.20.x) so it seems to be something relatively new.  
> 
> It seems I have a choice: access my servers remotely or run HBase, and the two appear mutually incompatible. I think my options are:
> a) revert to an old version of HBase
> b) switch to Accumulo, or
> c) switch to Cassandra.
> 
> Alan 
> 
> 
> -----Original Message-----
> From: Mohammad Tariq [mailto:dontariq@gmail.com] 
> Sent: Wednesday, November 21, 2012 3:11 PM
> To: user@hbase.apache.org
> Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)
> 
> Hello Alan,
> 
>    It's better to keep 127.0.0.1 out of your /etc/hosts and make sure you
> have proper DNS resolution as it plays an important role in proper Hbase
> functioning. Also add the "hbase.zookeeper.property.clientPort" property in
> your hbase-site.xml file and see if it works for you.
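
For reference, the clientPort property mentioned above goes in hbase-site.xml on every node. 2181 is ZooKeeper's default (and what this cluster already uses), so setting it explicitly mainly rules out a port mismatch with an externally managed quorum:

```xml
<!-- hbase-site.xml: make the ZooKeeper client port explicit.
     2181 is the default; change it only if your zoo.cfg uses another port. -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```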
> 
> Regards,
>    Mohammad Tariq
> 
> 
> 
> On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com>wrote:
> 
>> I'd appreciate any suggestions as to how to get HBase up and running.
>> Right now it dies after a few seconds on all servers.  I am using Hadoop
>> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>> 
>> History: Yesterday I managed to get HBase 0.94.2 working but only after
>> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
>> clocks).  All was fine until this morning when I realized I could not
>> initiate remote log-ins to my servers (using VNC or NX) until I restored
>> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
>> non-working HBase.
>> 
>> With HBase managing ZK I see the following in the HBase Master and ZK
>> logs, respectively:
>> 2012-11-21 13:40:22,236 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>> 
>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 
>> At roughly the same time (clocks not perfectly synchronized) I see this in
>> a Regionserver log:
>> 2012-11-21 13:40:57,727 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> ...
>> 2012-11-21 13:40:57,848 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 
>> Logs and configuration follow below.
>> 
>> Then I tried managing ZK myself, and HBase failed for seemingly
>> different reasons:
>> 2012-11-21 14:46:37,320 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
>> is not a retry
>> 
>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
>> Unhandled exception. Starting shutdown.
>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused
>> 
>> Both HMaster error logs (HBase-managed and self-managed ZK) mention the
>> 127.0.0.1 IP address instead of referring to the server by its name
>> (hadoop1), by its true IP address, or simply as localhost.
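The usual culprit on Ubuntu is an /etc/hosts line mapping the machine's own hostname to 127.0.0.1 (or to Ubuntu's default 127.0.1.1). A layout that commonly keeps both local logins (VNC/NX) and HBase working leaves loopback for localhost only and puts the real LAN addresses next to the cluster names. The addresses below are the ones that appear in these logs; adjust to your network:

```
127.0.0.1     localhost
# do NOT also map hadoop1 to 127.0.0.1 or 127.0.1.1 here
10.64.155.52  hadoop1
10.64.155.53  hadoop2
10.64.155.54  hadoop3
```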
>> 
>> So, start-hbase.sh itself works OK (HBase managing ZK):
>> ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
>> hadoop1: starting zookeeper, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
>> hadoop2: starting zookeeper, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
>> hadoop3: starting zookeeper, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
>> starting master, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
>> hadoop2: starting regionserver, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
>> hadoop6: starting regionserver, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
>> hadoop3: starting regionserver, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
>> hadoop5: starting regionserver, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
>> hadoop4: starting regionserver, logging to
>> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>> 
>> I have in hbase-site.xml:
>>   <property>
>>     <name>hbase.cluster.distributed</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>hbase.master</name>
>>     <value>hadoop1:60000</value>
>>   </property>
>>   <property>
>>     <name>hbase.rootdir</name>
>>     <value>hdfs://hadoop1:9000/hbase</value>
>>   </property>
>>   <property>
>>     <name>hbase.zookeeper.property.dataDir</name>
>>     <value>/tmp/zookeeper_data</value>
>>   </property>
>>   <property>
>>     <name>hbase.zookeeper.quorum</name>
>>     <value>hadoop1,hadoop2,hadoop3</value>
>>   </property>
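Since the FATAL error shows the master trying to reach hadoop1/127.0.0.1:9000, it is worth confirming that hbase.rootdir matches the NameNode address in Hadoop's core-site.xml exactly (same host and port), and that hadoop1 resolves to its LAN address on every node. On Hadoop 1.x the corresponding setting would look something like this (value assumed to mirror the hbase.rootdir shown above):

```xml
<!-- core-site.xml on the Hadoop side; the host:port must match
     the hdfs://hadoop1:9000 prefix of hbase.rootdir exactly. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop1:9000</value>
</property>
```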
>> 
>> I have in hbase-env.sh:
>> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
>> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
>> export HBASE_HEAPSIZE=2000
>> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
>> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
>> export HBASE_MANAGES_ZK=true
>> 
>> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
>> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 386178
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 386178
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> HBase 0.94.2
>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
>> serverside HConnection retries=100
>> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> Initializing RPC Metrics with hostName=HMaster, port=60000
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:host.name=hadoop1
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.version=1.6.0_25
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.vendor=Sun Microsystems Inc.
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase
-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoo
p-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.io.tmpdir=/tmp
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.compiler=<NA>
>> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.name=Linux
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.arch=amd64
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.version=3.2.0-24-generic
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.name=ngc
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.home=/home/ngc
>> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
>> sessionTimeout=180000 watcher=master:60000
>> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /127.0.0.1:2181
>> 2012-11-21 13:40:22,099 INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
>> this process is 742@hadoop1
>> 2012-11-21 13:40:22,106 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,106 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,236 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 2000ms before retry #1...
>> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.53:2181
>> 2012-11-21 13:40:22,411 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,411 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.54:2181
>> 2012-11-21 13:40:22,747 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,747 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.52:2181
>> 2012-11-21 13:40:22,967 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:22,967 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:24,176 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:24,176 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:24,277 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 4000ms before retry #2...
>> 2012-11-21 13:40:24,766 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:24,767 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:24,767 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:24,767 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:24,768 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:25,756 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:25,757 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:25,757 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:25,757 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:26,597 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:26,597 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:26,597 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:40:26,598 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:27,775 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:27,775 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:27,775 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:27,776 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:28,318 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:28,318 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:28,419 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 8000ms before retry #3...
>> 2012-11-21 13:40:29,106 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:29,106 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:29,106 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:29,107 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:30,039 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:30,039 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:30,039 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:40:30,040 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:31,283 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:31,283 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:31,283 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:31,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:32,142 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:32,143 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:32,143 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:32,143 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:32,144 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:32,479 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:32,480 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:32,480 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:32,480 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:32,481 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:33,294 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:40:33,295 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:33,295 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:40:33,296 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:40:34,962 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:34,962 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:34,962 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:34,963 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:35,660 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:40:35,661 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:35,661 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:35,661 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:35,662 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:36,522 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:40:36,523 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:36,523 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:36,523 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:36,524 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:36,625 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>> 2012-11-21 13:40:36,625 ERROR
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
>> failed after 3 retries
>> 2012-11-21 13:40:36,626 ERROR
>> org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
>> java.lang.RuntimeException: Failed construction of Master: class
>> org.apache.hadoop.hbase.master.HMaster
>>      at
>> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>>      at
>> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>>      at
>> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>>      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>      at
>> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
>> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>>      at
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>>      ... 5 more
>> 
>> 
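Note the pattern in the master log above: hadoop2 and hadoop3 are reached at their real 10.64.155.x addresses, but hadoop1 shows up as hadoop1/127.0.0.1 — consistent with the restored 127.0.0.1 line in /etc/hosts mapping the hostname itself to loopback, so the ZooKeeper server on hadoop1 is unreachable from peers. A quick way to spot this on each node is a small resolution check (a sketch — the hostnames are just the ones appearing in these logs; adjust for your cluster):

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if the name resolves to a 127.x loopback address,
    False if it resolves elsewhere or does not resolve at all."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return False
    return addr.startswith("127.")

# Run this on every node; any quorum hostname that comes back True on a
# peer means that host's ZooKeeper is advertised at an address the peer
# cannot use.
for host in ["hadoop1", "hadoop2", "hadoop3"]:
    print(host, resolves_to_loopback(host))
```

If the cluster hostname itself resolves to 127.0.0.1 on its own machine (a common Ubuntu default via a `127.0.1.1`/`127.0.0.1 hostname` entry), ZooKeeper binds or advertises loopback and remote clients see exactly the "server has closed socket" churn in these logs.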
>> From server hadoop2 (running regionserver, ZK, DN, TT)
>> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 193105
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 193105
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
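As an aside, the ulimit dump above shows `open files 1024`; the HBase documentation recommends raising the file-descriptor and process limits substantially for regionserver hosts. A sketch of the usual /etc/security/limits.conf change (the `ngc` user name is taken from these logs; the values are illustrative, not prescriptive):

```
# /etc/security/limits.conf -- illustrative values for the HBase user
ngc  -  nofile  32768
ngc  -  nproc   32000
```

This is unrelated to the ZooKeeper connection failures, but it will bite later under load if left at the default.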
>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> HBase 0.94.2
>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 2012-11-21 13:40:57,172 INFO
>> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM)
>> 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
>> 2012-11-21 13:40:57,172 INFO
>> org.apache.hadoop.hbase.util.ServerCommandLine:
>> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m,
>> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
>> -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError,
>> -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode,
>> -Dhbase.log.dir=/tmp/hbase-ngc/logs,
>> -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log,
>> -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc,
>> -Dhbase.root.logger=INFO,DRFA,
>> -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64,
>> -Dhbase.security.logger=INFO,DRFAS]
>> 2012-11-21 13:40:57,222 DEBUG
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside
>> HConnection retries=100
>> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-1
>> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> Initializing RPC Metrics with hostName=HRegionServer, port=60020
>> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig:
>> Allocating LruBlockCache with maximum size 493.8m
>> 2012-11-21 13:40:57,699 INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook
>> thread: Shutdownhook:regionserver60020
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:host.name=hadoop2.aj.c2fse.northgrum.com
>> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.version=1.6.0_25
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.vendor=Sun Microsystems Inc.
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.io.tmpdir=/tmp
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.compiler=<NA>
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.name=Linux
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.arch=amd64
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.version=3.0.0-12-generic
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.name=ngc
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.home=/home/ngc
>> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
>> sessionTimeout=180000 watcher=regionserver:60020
>> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.54:2181
>> 2012-11-21 13:40:57,719 INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
>> this process is 12835@hadoop2
>> 2012-11-21 13:40:57,727 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:57,727 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:57,848 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 2000ms before retry #1...
>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.53:2181
>> 2012-11-21 13:40:58,283 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:58,283 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /127.0.0.1:2181
>> 2012-11-21 13:40:58,726 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:58,726 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
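The cycle just above shows the same root cause from hadoop2's side: the regionserver opens a socket to `/127.0.0.1:2181` when it tries the quorum member named hadoop1, i.e. hadoop2's own /etc/hosts (or the one on hadoop1 that ZooKeeper binds with) maps hadoop1 to loopback. The usual fix is to keep 127.0.0.1 for `localhost` only — which preserves VNC/NX logins — and map each cluster hostname to its LAN address. A sketch, using the addresses that appear in these logs (adjust to your network):

```
# /etc/hosts sketch -- loopback stays, but never carries a cluster hostname
127.0.0.1    localhost
10.64.155.52 hadoop1.aj.c2fse.northgrum.com hadoop1
10.64.155.53 hadoop2.aj.c2fse.northgrum.com hadoop2
10.64.155.54 hadoop3.aj.c2fse.northgrum.com hadoop3
```

The same file should be deployed on every node; on Ubuntu also check for a `127.0.1.1 <hostname>` line, which has the same effect.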
>> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.52:2181
>> 2012-11-21 13:40:59,368 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:40:59,368 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:00,660 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:00,660 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:00,761 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 4000ms before retry #2...
>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:01,422 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:01,422 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:02,370 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:02,370 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:02,627 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:02,627 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:03,968 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:03,969 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:04,733 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:04,733 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:04,835 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter:
>> Sleeping 8000ms before retry #3...
>> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:05,741 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:05,741 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:06,192 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:06,192 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:07,313 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:07,313 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:08,273 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:08,273 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:09,090 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:09,090 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:09,711 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:09,711 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:11,121 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:11,121 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:11,600 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:11,600 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server hadoop1/127.0.0.1:2181
>> 2012-11-21 13:41:12,320 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:12,320 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1/127.0.0.1:2181, initiating session
>> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
>> 2012-11-21 13:41:12,861 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:12,861 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
>> initiating session
>> 2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:12,962 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
>> ZooKeeper exception:
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>> 2012-11-21 13:41:12,962 ERROR
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
>> failed after 3 retries
>> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
>> regionserver:60020 Unable to set watcher on znode /hbase/master
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,966 ERROR
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020
>> Received unexpected KeeperException, re-throwing exception
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,966 FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception
>> during initialization, aborting
>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>> KeeperErrorCode = ConnectionLoss for /hbase/master
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>>      at
>> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>      at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>>      at
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>>      at
>> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:12,969 FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
>> loaded coprocessors are: []
>> 2012-11-21 13:41:12,969 INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
>> exception during initialization, aborting
>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
>> 2012-11-21 13:41:14,834 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:14,834 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server
>> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
>> 2012-11-21 13:41:15,335 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 13:41:15,335 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
>> initiating session
>> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to
>> read additional data from server sessionid 0x0, likely server has closed
>> socket, closing socket connection and attempting reconnect
>> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
>> server on 60020
>> 2012-11-21 13:41:15,975 FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
>> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS
>> failed.  Hence aborting RS.
>> java.io.IOException: Received the shutdown message while waiting.
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>>      at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>>      at java.lang.Thread.run(Thread.java:662)
>> 2012-11-21 13:41:15,976 FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
>> loaded coprocessors are: []
>> 2012-11-21 13:41:15,976 INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
>> of RS failed.  Hence aborting RS.
>> 2012-11-21 13:41:15,978 INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
>> MXBean
>> 2012-11-21 13:41:15,980 INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting;
>> hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
>> 2012-11-21 13:41:15,980 INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
>> 2012-11-21 13:41:15,981 INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown
>> hook thread.
>> 2012-11-21 13:41:15,981 INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>> 
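[Editor's aside, not part of the original mail: the regionserver log above alternates between the real addresses (10.64.155.x) and `hadoop1/127.0.0.1:2181`, i.e. the node's own hostname resolves to loopback. A minimal sketch for checking this on a node; the function name `resolves_to_loopback` is illustrative, not from the thread:]

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if `hostname` resolves to a 127.x.x.x address."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return False
    return addr.startswith("127.")

if __name__ == "__main__":
    name = socket.gethostname()
    print(name, "->", socket.gethostbyname(name))
    # On a correctly configured cluster node this should print the real
    # interface address (e.g. 10.64.155.52), not 127.0.0.1.
```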
>> Finally, in the ZooKeeper log from hadoop1 I have:
>> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 386178
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 386178
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
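[Editor's aside, not part of the original mail: the `open files (-n) 1024` value in the ulimit dump above is the stock Ubuntu default, and the HBase documentation recommends raising the nofile limit substantially (e.g. to 10240 or more) on HBase/HDFS hosts. A small sketch for checking the limit from Python; the limits.conf lines in the comment are an assumed Ubuntu-style fix:]

```python
import resource

# Report the per-process file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))

# Assumed fix on Ubuntu: add lines like these to /etc/security/limits.conf
# (user 'ngc' per the logs above) and log in again:
#   ngc  soft  nofile  10240
#   ngc  hard  nofile  10240
```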
>> 2012-11-21 13:40:20,279 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
>> quorums
>> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
>> name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
>> 2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
>> name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
>> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration:
>> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
>> name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
>> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called.
>> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
>> name=log4j:logger=org.apache.hadoop.hbase
>> 2012-11-21 13:40:20,336 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
>> 2012-11-21 13:40:20,356 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port
>> 0.0.0.0/0.0.0.0:2181
>> 2012-11-21 13:40:20,378 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
>> 2012-11-21 13:40:20,379 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
>> 2012-11-21 13:40:20,379 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to
>> 180000
>> 2012-11-21 13:40:20,379 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
>> 2012-11-21 13:40:20,395 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found!
>> Creating with a reasonable default of 0. This should only happen when you
>> are upgrading your installation
>> 2012-11-21 13:40:20,442 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
>> 0.0.0.0/0.0.0.0:3888
>> 2012-11-21 13:40:20,456 INFO
>> org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
>> 2012-11-21 13:40:20,458 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id
>> =  0, proposed zxid=0x0
>> 2012-11-21 13:40:20,460 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0
>> (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0
>> (n.peerEPoch), LOOKING (my state)
>> 2012-11-21 13:40:20,464 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:20,465 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:20,663 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:20,663 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:20,663 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> out: 400
>> 2012-11-21 13:40:21,064 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:21,065 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:21,065 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> out: 800
>> 2012-11-21 13:40:21,866 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:21,866 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:21,866 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> out: 1600
>> 2012-11-21 13:40:22,113 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /127.0.0.1:55216
>> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> Closed socket connection for client /127.0.0.1:55216 (no session
>> established for client)
>> 2012-11-21 13:40:22,373 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /10.64.155.52:60339
>> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> Closed socket connection for client /10.64.155.52:60339 (no session
>> established for client)
>> 2012-11-21 13:40:22,968 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /10.64.155.52:60342
>> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> Closed socket connection for client /10.64.155.52:60342 (no session
>> established for client)
>> 2012-11-21 13:40:23,187 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /127.0.0.1:55221
>> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> Closed socket connection for client /127.0.0.1:55221 (no session
>> established for client)
>> 2012-11-21 13:40:23,467 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (2, 0)
>> 2012-11-21 13:40:23,467 INFO
>> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
>> identifier, so dropping the connection: (1, 0)
>> 2012-11-21 13:40:23,467 INFO
>> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
>> out: 3200
>> 2012-11-21 13:40:24,116 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /10.64.155.54:35599
>> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn:
>> Exception causing close of session 0x0 due to java.io.IOException:
>> ZooKeeperServer not running
>> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn:
>> Closed socket connection for client /10.64.155.54:35599 (no session
>> established for client)
>> 2012-11-21 13:40:24,176 INFO
>> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
>> connection from /127.0.0.1:55225
>> ...
>> 
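[Editor's aside, not part of the original mail: a layout of /etc/hosts that keeps 127.0.0.1 for local services such as VNC/NX while letting the cluster hostnames resolve to their real interfaces is to map each hostname only to its static address. A sketch using the hostnames and IPs that appear in the logs above:]

```
127.0.0.1     localhost
# Do NOT alias the machine's own hostname to 127.0.0.1 (or 127.0.1.1);
# map it to the real interface address instead:
10.64.155.52  hadoop1.aj.c2fse.northgrum.com  hadoop1
10.64.155.53  hadoop2.aj.c2fse.northgrum.com  hadoop2
10.64.155.54  hadoop3.aj.c2fse.northgrum.com  hadoop3
```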
>> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem
>> in /etc/hosts):
>> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
>> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 386178
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1024
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) 386178
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> HBase 0.94.2
>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
>> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
>> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
>> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
>> serverside HConnection retries=100
>> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting
>> Thread-2
>> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
>> Initializing RPC Metrics with hostName=HMaster, port=60000
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:host.name=hadoop1
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.version=1.6.0_25
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.vendor=Sun Microsystems Inc.
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.home=/home/ngc/jdk1.6.0_25/jre
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase
-0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoo
p-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/had
oop-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.io.tmpdir=/tmp
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:java.compiler=<NA>
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.name=Linux
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.arch=amd64
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:os.version=3.2.0-24-generic
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.name=ngc
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.home=/home/ngc
>> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
>> environment:user.dir=/home/ngc/hbase-0.94.2
>> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating
>> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
>> sessionTimeout=180000 watcher=master:60000
>> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening
>> socket connection to server /10.64.155.54:2181
>> 2012-11-21 14:46:37,087 INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
>> this process is 12692@hadoop1
>> 2012-11-21 14:46:37,095 WARN
>> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
>> java.lang.SecurityException: Unable to locate a login configuration
>> occurred when trying to find JAAS configuration.
>> 2012-11-21 14:46:37,095 INFO
>> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section 'Client'
>> could not be found. If you are not using SASL, you may ignore this. On the
>> other hand, if you expected SASL to work, please fix your JAAS
>> configuration.
>> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket
>> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
>> initiating session
>> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session
>> establishment complete on server
>> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid =
>> 0x33b247f4c380000, negotiated timeout = 40000
>> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> Responder: starting
>> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> listener on 60000: starting
>> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 0 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 1 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 2 on 60000: starting
>> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 3 on 60000: starting
>> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 4 on 60000: starting
>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 5 on 60000: starting
>> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 6 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 7 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 8 on 60000: starting
>> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>> handler 9 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
>> Server handler 0 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
>> Server handler 1 on 60000: starting
>> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
>> Server handler 2 on 60000: starting
>> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=Master,
>> sessionId=hadoop1,60000,1353527196915
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: revision
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: hdfsUser
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: hdfsDate
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: hdfsUrl
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: date
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: hdfsRevision
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: user
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: hdfsVersion
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: url
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
>> MetricsString added: version
>> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-11-21 14:46:37,272 INFO
>> org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
>> 2012-11-21 14:46:37,299 INFO
>> org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for
>> /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master
>> directory
>> 2012-11-21 14:46:37,320 WARN
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
>> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
>> is not a retry
>> 2012-11-21 14:46:37,321 INFO
>> org.apache.hadoop.hbase.master.ActiveMasterManager:
>> Master=hadoop1,60000,1353527196915
>> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
>> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
>> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
>> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
>> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
>> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
>> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
>> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
>> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
>> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
>> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
>> Unhandled exception. Starting shutdown.
>> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused
>>      at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>      at $Proxy10.getProtocolVersion(Unknown Source)
>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>      at
>> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>      at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>      at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>>      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>      at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>>      at
>> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>>      at
>> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>>    ...
>> 
>> [Message clipped]
> 


RE: Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by "Ratner, Alan S (IS)" <Al...@ngc.com>.
Thanks Mohammad.  I set the clientPort, but as I was already using the default value of 2181 it made no difference.

I cannot remove the 127.0.0.1 line from my hosts file.  I connect to my servers via VPN from a Windows laptop using either NX or VNC, and both apparently rely on the 127.0.0.1 address.  This was not a problem with older versions of HBase (I used to use 0.20.x), so it seems to be something relatively new.

It seems I have a choice: access my servers remotely or run HBase; the two appear to be mutually exclusive.  I think my options are:
a) revert to an old version of HBase,
b) switch to Accumulo, or
c) switch to Cassandra.

Alan 


-----Original Message-----
From: Mohammad Tariq [mailto:dontariq@gmail.com] 
Sent: Wednesday, November 21, 2012 3:11 PM
To: user@hbase.apache.org
Subject: EXT :Re: HBase Issues (perhaps related to 127.0.0.1)

Hello Alan,

    It's better to keep 127.0.0.1 out of your /etc/hosts and to make sure you
have proper DNS resolution, as it plays an important role in proper HBase
functioning. Also add the "hbase.zookeeper.property.clientPort" property to
your hbase-site.xml file and see if that works for you.
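For completeness, the property in question would look like the following in hbase-site.xml. This is a sketch only: 2181 is ZooKeeper's default client port, so set a different value only if your ensemble actually listens elsewhere.

```xml
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```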

Regards,
    Mohammad Tariq



On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com>wrote:

> I'd appreciate any suggestions as to how to get HBase up and running.
>  Right now it dies after a few seconds on all servers.  I am using Hadoop
> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>
> History: Yesterday I managed to get HBase 0.94.2 working but only after
> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
> clocks).  All was fine until this morning when I realized I could not
> initiate remote log-ins to my servers (using VNC or NX) until I restored
> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
> non-working HBase.
>
> With HBase managing ZK I see the following in the HBase Master and ZK
> logs, respectively:
> 2012-11-21 13:40:22,236 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
>
> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
>
> At roughly the same time (clocks not perfectly synchronized) I see this in
> a Regionserver log:
> 2012-11-21 13:40:57,727 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> ...
> 2012-11-21 13:40:57,848 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>
> Logs and configuration follows.
>
> Then I tried managing ZK myself and HBase then fails for seemingly
> different reasons.
> 2012-11-21 14:46:37,320 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
> is not a retry
>
> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
> connection exception: java.net.ConnectException: Connection refused
>
> Both HMaster error logs (self-managed and me-managed ZK) mention the
> 127.0.0.1 IP address instead of referring to the server by its name
> (hadoop1) or its true IP address or simply as localhost.
>
> So, start-hbase.sh works OK (HB managing ZK):
> ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
> hadoop1: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
> hadoop2: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
> hadoop3: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
> starting master, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
> hadoop2: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
> hadoop6: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
> hadoop3: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
> hadoop5: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
> hadoop4: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>
> I have in hbase-site.xml:
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>       <property>
>             <name>hbase.master</name>
>             <value>hadoop1:60000</value>
>         </property>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://hadoop1:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.dataDir</name>
>     <value>/tmp/zookeeper_data</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>hadoop1,hadoop2,hadoop3</value>
> </property>
>
> I have in hbase-env.sh:
> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
> export HBASE_HEAPSIZE=2000
> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
> export HBASE_MANAGES_ZK=true
>
> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
> serverside HConnection retries=100
> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HMaster, port=60000
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop1
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-
0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop
-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.2.0-24-generic
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=master:60000
> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /127.0.0.1:2181
> 2012-11-21 13:40:22,099 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 742@hadoop1
> 2012-11-21 13:40:22,106 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:22,106 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:22,236 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 2000ms before retry #1...
> 2012-11-21 13:40:22,411 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.53:2181
> 2012-11-21 13:40:22,411 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:22,411 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:22,412 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:40:22,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:22,746 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.54:2181
> 2012-11-21 13:40:22,747 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:22,747 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:22,747 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:40:22,748 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:22,967 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.52:2181
> 2012-11-21 13:40:22,967 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:22,967 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:24,175 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:40:24,176 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:24,176 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:24,176 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:40:24,177 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:24,277 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:24,277 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 4000ms before retry #2...
> [same cycle -- Opening socket connection / SecurityException JAAS warning /
> Socket connection established / Unable to read additional data -- repeated
> in turn against hadoop2 (10.64.155.53), hadoop3 (10.64.155.54),
> hadoop1 (10.64.155.52) and hadoop1/127.0.0.1; trimmed for length]
> 2012-11-21 13:40:28,317 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:40:28,318 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:28,318 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:28,318 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:40:28,319 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:28,419 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:28,419 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 8000ms before retry #3...
> [nine more identical connect-and-drop cycles, rotating through
> hadoop1/127.0.0.1, 10.64.155.52, 10.64.155.53 and 10.64.155.54, each with
> the same JAAS warning; trimmed for length]
> 2012-11-21 13:40:36,625 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:36,625 ERROR
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 3 retries
> 2012-11-21 13:40:36,626 ERROR
> org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
> java.lang.RuntimeException: Failed construction of Master: class
> org.apache.hadoop.hbase.master.HMaster
>       at
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>       at
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>       at
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>       at
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>       at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>       at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>       at
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>       ... 5 more
>
>
> From server hadoop2 (running regionserver, ZK, DN, TT)
> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 193105
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 193105
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
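(Side note, probably unrelated to the ZooKeeper failures: the ulimit dump above shows open files at 1024, which the HBase reference guide flags as too low for a regionserver -- it eventually surfaces as "Too many open files" under load. A sketch of the usual fix, assuming HBase runs as the ngc user seen in these logs:

```
# /etc/security/limits.conf additions (values are the commonly suggested
# starting point from the HBase book; "ngc" is the HBase user here)
ngc  -  nofile  32768
ngc  -  nproc   32000
```

A re-login is needed before the new limits show up in ulimit -a.)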
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 13:40:57,172 INFO
> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM)
> 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
> 2012-11-21 13:40:57,172 INFO
> org.apache.hadoop.hbase.util.ServerCommandLine:
> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m,
> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
> -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError,
> -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode,
> -Dhbase.log.dir=/tmp/hbase-ngc/logs,
> -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log,
> -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc,
> -Dhbase.root.logger=INFO,DRFA,
> -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64,
> -Dhbase.security.logger=INFO,DRFAS]
> 2012-11-21 13:40:57,222 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside
> HConnection retries=100
> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,471 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,473 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,475 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,477 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,480 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,482 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,484 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,486 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,488 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HRegionServer, port=60020
> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig:
> Allocating LruBlockCache with maximum size 493.8m
> 2012-11-21 13:40:57,699 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook
> thread: Shutdownhook:regionserver60020
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop2.aj.c2fse.northgrum.com
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:[... bundled lib jars trimmed for length ...]:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.0.0-12-generic
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=regionserver:60020
> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.54:2181
> 2012-11-21 13:40:57,719 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 12835@hadoop2
> 2012-11-21 13:40:57,727 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:57,727 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:57,848 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 2000ms before retry #1...
> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.53:2181
> 2012-11-21 13:40:58,283 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:58,283 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /127.0.0.1:2181
> 2012-11-21 13:40:58,726 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:58,726 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> [two more identical connect-and-drop cycles, against 10.64.155.52 and
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54; trimmed for length]
> 2012-11-21 13:41:00,761 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 4000ms before retry #2...
> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:01,422 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:01,422 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:02,370 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:02,370 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:02,627 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:02,627 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:03,968 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:03,969 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:04,733 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:04,733 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:04,835 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 8000ms before retry #3...
> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:05,741 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:05,741 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:06,192 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:06,192 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:07,313 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:07,313 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:08,273 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:08,273 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:09,090 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:09,090 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:09,711 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:09,711 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:11,121 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:11,121 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:11,600 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:11,600 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:12,320 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:12,320 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:12,861 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:12,861 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,962 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:12,962 ERROR
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 3 retries
> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
> regionserver:60020 Unable to set watcher on znode /hbase/master
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,966 ERROR
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020
> Received unexpected KeeperException, re-throwing exception
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,966 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception
> during initialization, aborting
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,969 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> loaded coprocessors are: []
> 2012-11-21 13:41:12,969 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
> exception during initialization, aborting
> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:14,834 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:14,834 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:15,335 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:15,335 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> server on 60020
> 2012-11-21 13:41:15,975 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS
> failed.  Hence aborting RS.
> java.io.IOException: Received the shutdown message while waiting.
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:15,976 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> loaded coprocessors are: []
> 2012-11-21 13:41:15,976 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
> of RS failed.  Hence aborting RS.
> 2012-11-21 13:41:15,978 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
> MXBean
> 2012-11-21 13:41:15,980 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting;
> hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
> 2012-11-21 13:41:15,980 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
> 2012-11-21 13:41:15,981 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown
> hook thread.
> 2012-11-21 13:41:15,981 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>
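The alternating connection targets in the log above (hadoop1/127.0.0.1:2181 versus the fully-qualified names with their real 10.64.155.x addresses) suggest that on at least one node the short hostname resolves to loopback. A quick way to spot that is to scan /etc/hosts for cluster hostnames bound to 127.x addresses; here is a minimal sketch (the sample hosts text is hypothetical, not your actual file):

```python
# Flag cluster hostnames that an /etc/hosts file maps to a loopback
# address. Such mappings make ZooKeeper clients resolve the host to
# 127.0.0.1, matching the "hadoop1/127.0.0.1:2181" lines in the log.

def loopback_mappings(hosts_text):
    """Return {hostname: ip} for names bound to 127.x.x.x addresses."""
    bad = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if ip.startswith("127."):
            for name in names:
                if name not in ("localhost", "localhost.localdomain"):
                    bad[name] = ip
    return bad

# Hypothetical hosts file reproducing the symptom:
sample = """
127.0.0.1    localhost
127.0.1.1    hadoop1          # Ubuntu default; breaks ZK resolution
10.64.155.52 hadoop1.example.com hadoop1
"""
print(loopback_mappings(sample))  # {'hadoop1': '127.0.1.1'}
```

If this returns anything, those hostnames are the ones ZooKeeper and HBase will resolve to loopback on that machine.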
> Finally, in the ZooKeeper log from hadoop1 I have:
> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 13:40:20,279 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
> quorums
> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
> 2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called.
> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase
> 2012-11-21 13:40:20,336 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
> 2012-11-21 13:40:20,356 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port
> 0.0.0.0/0.0.0.0:2181
> 2012-11-21 13:40:20,378 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to
> 180000
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
> 2012-11-21 13:40:20,395 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found!
> Creating with a reasonable default of 0. This should only happen when you
> are upgrading your installation
> 2012-11-21 13:40:20,442 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
> 0.0.0.0/0.0.0.0:3888
> 2012-11-21 13:40:20,456 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
> 2012-11-21 13:40:20,458 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id
> =  0, proposed zxid=0x0
> 2012-11-21 13:40:20,460 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0
> (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0
> (n.peerEPoch), LOOKING (my state)
> 2012-11-21 13:40:20,464 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:20,465 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 400
> 2012-11-21 13:40:21,064 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:21,065 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:21,065 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 800
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 1600
> 2012-11-21 13:40:22,113 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55216
> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /127.0.0.1:55216 (no session
> established for client)
> 2012-11-21 13:40:22,373 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.52:60339
> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.52:60339 (no session
> established for client)
> 2012-11-21 13:40:22,968 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.52:60342
> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.52:60342 (no session
> established for client)
> 2012-11-21 13:40:23,187 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55221
> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /127.0.0.1:55221 (no session
> established for client)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 3200
> 2012-11-21 13:40:24,116 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.54:35599
> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.54:35599 (no session
> established for client)
> 2012-11-21 13:40:24,176 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55225
> ...
>
> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem
> in /etc/hosts):
> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
> serverside HConnection retries=100
> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HMaster, port=60000
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop1
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-
0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop
-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.2.0-24-generic
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=master:60000
> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.54:2181
> 2012-11-21 14:46:37,087 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 12692@hadoop1
> 2012-11-21 14:46:37,095 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 14:46:37,095 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session
> establishment complete on server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid =
> 0x33b247f4c380000, negotiated timeout = 40000
> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> Responder: starting
> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> listener on 60000: starting
> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 0 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 1 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 2 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 3 on 60000: starting
> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 4 on 60000: starting
> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 5 on 60000: starting
> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 6 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 7 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 8 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 9 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 0 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 1 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 2 on 60000: starting
> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=Master,
> sessionId=hadoop1,60000,1353527196915
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: revision
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsUser
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsDate
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsUrl
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: date
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsRevision
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: user
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsVersion
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: url
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: version
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-11-21 14:46:37,272 INFO
> org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
> 2012-11-21 14:46:37,299 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for
> /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master
> directory
> 2012-11-21 14:46:37,320 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
> is not a retry
> 2012-11-21 14:46:37,321 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager:
> Master=hadoop1,60000,1353527196915
> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
> connection exception: java.net.ConnectException: Connection refused
>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>       at $Proxy10.getProtocolVersion(Unknown Source)
>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>       at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>       at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>       at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>       at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>       at
> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>       at
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>     ...
>
> [Message clipped]
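The `Retrying connect to server: hadoop1/127.0.0.1:9000` lines above are the classic symptom of the master's own hostname resolving to a loopback address. A minimal sketch of the check, assuming the hostname `hadoop1` from this thread and a hypothetical /etc/hosts (not the poster's actual file):

```shell
#!/bin/sh
# Sketch: detect whether a hostname maps to loopback in an /etc/hosts-style
# file. The sample file contents below are hypothetical.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1    localhost hadoop1
10.64.155.52 hadoop1
EOF

host=hadoop1
# Print the first address whose name list contains $host -- this mirrors
# the order-sensitive, first-match lookup glibc performs on /etc/hosts.
addr=$(awk -v h="$host" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$hosts_file")

case "$addr" in
  127.*) echo "$host resolves to loopback ($addr): HBase will advertise the wrong address" ;;
  *)     echo "$host resolves to $addr" ;;
esac
rm -f "$hosts_file"
```

On a real cluster node you would run the same lookup with `getent hosts hadoop1` instead of parsing a sample file.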

Re: HBase Issues (perhaps related to 127.0.0.1)

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Alan,

    It's better to keep 127.0.0.1 out of your /etc/hosts file and to make
sure you have proper DNS resolution, as it plays an important role in
correct HBase functioning. Also, add the "hbase.zookeeper.property.clientPort"
property to your hbase-site.xml file and see whether that works for you.
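For reference, the suggested property would look something like this in hbase-site.xml; port 2181 is the ZooKeeper default and matches the connectString seen in the logs above, so adjust it if your ensemble listens elsewhere:

```xml
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
```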

Regards,
    Mohammad Tariq



On Thu, Nov 22, 2012 at 1:31 AM, Ratner, Alan S (IS) <Al...@ngc.com> wrote:

> I'd appreciate any suggestions as to how to get HBase up and running.
>  Right now it dies after a few seconds on all servers.  I am using Hadoop
> 1.0.4, ZooKeeper 3.4.4 and HBase 0.94.2 on Ubuntu.
>
> History: Yesterday I managed to get HBase 0.94.2 working but only after
> removing the 127.0.0.1 line from my /etc/hosts file (and synchronizing my
> clocks).  All was fine until this morning when I realized I could not
> initiate remote log-ins to my servers (using VNC or NX) until I restored
> the 127.0.0.1 line in /etc/hosts.  With that restored I am back to a
> non-working HBase.
>
> With HBase managing ZK I see the following in the HBase Master and ZK
> logs, respectively:
> 2012-11-21 13:40:22,236 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
>
> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
>
> At roughly the same time (clocks not perfectly synchronized) I see this in
> a Regionserver log:
> 2012-11-21 13:40:57,727 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> ...
> 2012-11-21 13:40:57,848 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>
> Logs and configuration follow.
>
> Then I tried managing ZK myself, and HBase failed for seemingly
> different reasons.
> 2012-11-21 14:46:37,320 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
> is not a retry
>
> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
> connection exception: java.net.ConnectException: Connection refused
>
> Both HMaster error logs (with HBase-managed and with externally managed
> ZK) mention the 127.0.0.1 IP address instead of referring to the server
> by its name (hadoop1), by its true IP address, or simply as localhost.
>
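One common way to keep remote log-ins working while avoiding this symptom is to leave the loopback line mapped to localhost only and map the machine's hostname to its LAN address. A hypothetical /etc/hosts sketch (the 10.64.155.52 address appears in the logs above and is assumed here, for illustration, to be hadoop1's interface):

```
127.0.0.1    localhost
10.64.155.52 hadoop1
```

Ubuntu installs often also add a `127.0.1.1 <hostname>` line, which causes the same wrong-address symptom and should likewise be removed or repointed at the LAN address.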
> So, start-hbase.sh works OK (HBase managing ZK):
> ngc@hadoop1:~/hbase-0.94.2$ bin/start-hbase.sh
> hadoop1: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop1.out
> hadoop2: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop2.out
> hadoop3: starting zookeeper, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-zookeeper-hadoop3.out
> starting master, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-master-hadoop1.out
> hadoop2: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop2.out
> hadoop6: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop6.out
> hadoop3: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop3.out
> hadoop5: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop5.out
> hadoop4: starting regionserver, logging to
> /tmp/hbase-ngc/logs/hbase-ngc-regionserver-hadoop4.out
>
> I have in hbase-site.xml:
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.master</name>
>     <value>hadoop1:60000</value>
>   </property>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://hadoop1:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.dataDir</name>
>     <value>/tmp/zookeeper_data</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>hadoop1,hadoop2,hadoop3</value>
>   </property>
>
> I have in hbase-env.sh:
> export JAVA_HOME=/home/ngc/jdk1.6.0_25/
> export HBASE_CLASSPATH=/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4
> export HBASE_HEAPSIZE=2000
> export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError
> -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
> export HBASE_LOG_DIR=/tmp/hbase-ngc/logs
> export HBASE_MANAGES_ZK=true
>
> From server hadoop1 (running HMaster, ZK, NN, SNN, JT)
> Wed Nov 21 13:40:20 EST 2012 Starting master on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 13:40:21,410 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 13:40:21,558 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
> serverside HConnection retries=100
> 2012-11-21 13:40:21,823 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,826 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,833 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,836 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,839 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,846 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,849 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 13:40:21,863 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HMaster, port=60000
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop1
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-
0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop
-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 13:40:22,078 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.2.0-24-generic
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 13:40:22,079 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 13:40:22,080 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=master:60000
> 2012-11-21 13:40:22,097 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /127.0.0.1:2181
> 2012-11-21 13:40:22,099 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 742@hadoop1
> 2012-11-21 13:40:22,106 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:22,106 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:22,110 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:22,236 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:22,236 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 2000ms before retry #1...
> [... the same JAAS warning / "server has closed socket" cycle then
> repeats against hadoop2 (10.64.155.53), hadoop3 (10.64.155.54),
> hadoop1 (10.64.155.52), and hadoop1/127.0.0.1, with further
> "KeeperErrorCode = ConnectionLoss for /hbase" warnings and
> RetryCounter sleeps of 4000ms (retry #2) at 13:40:24,277 and 8000ms
> (retry #3) at 13:40:28,419 -- identical blocks snipped for brevity ...]
> 2012-11-21 13:40:36,625 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
> 2012-11-21 13:40:36,625 ERROR
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 3 retries
> 2012-11-21 13:40:36,626 ERROR
> org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
> java.lang.RuntimeException: Failed construction of Master: class
> org.apache.hadoop.hbase.master.HMaster
>       at
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1792)
>       at
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:146)
>       at
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>       at
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1806)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1049)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:193)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:904)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:159)
>       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:282)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>       at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>       at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>       at
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1787)
>       ... 5 more
>
>
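The master log above shows the client resolving hadoop1 to 127.0.0.1 and having its connection closed immediately, while hadoop2 and hadoop3 are reached at their 10.64.155.x addresses. On Ubuntu that pattern usually comes from a `127.0.0.1 hadoop1` (or the default `127.0.1.1 hadoop1`) line in /etc/hosts. A sketch of an /etc/hosts that keeps loopback for local logins (VNC/NX) without mapping the cluster hostname onto it -- the IPs come from the logs, but the exact layout is an assumption:

```text
# Keep localhost on loopback, but do NOT also map the node's own
# hostname (hadoop1) to 127.0.0.1 or 127.0.1.1.
127.0.0.1      localhost
10.64.155.52   hadoop1.aj.c2fse.northgrum.com   hadoop1
10.64.155.53   hadoop2.aj.c2fse.northgrum.com   hadoop2
10.64.155.54   hadoop3.aj.c2fse.northgrum.com   hadoop3
```

With something like this on every node, each ZooKeeper server should bind and advertise its routable address rather than loopback, so the quorum can actually form while VNC/NX still have a working localhost.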
> From server hadoop2 (running regionserver, ZooKeeper, DataNode, TaskTracker)
> Wed Nov 21 13:40:56 EST 2012 Starting regionserver on hadoop2
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 193105
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 193105
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
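Separately from the ZooKeeper problem, the ulimit dump above shows `open files (-n) 1024`, which is generally considered too low for an HBase regionserver and can cause its own failures once regions and store files pile up. A hedged /etc/security/limits.conf fragment (the `ngc` user comes from the logs; 32768 is a conventional value, not a requirement):

```text
# /etc/security/limits.conf -- raise the file-descriptor limit for the
# user that runs HBase/Hadoop; takes effect on the next login session.
ngc   -   nofile   32768
```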
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 13:40:57,034 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 13:40:57,172 INFO
> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java HotSpot(TM)
> 64-Bit Server VM, vmVendor=Sun Microsystems Inc., vmVersion=20.0-b11
> 2012-11-21 13:40:57,172 INFO
> org.apache.hadoop.hbase.util.ServerCommandLine:
> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx2000m,
> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
> -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError,
> -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode,
> -Dhbase.log.dir=/tmp/hbase-ngc/logs,
> -Dhbase.log.file=hbase-ngc-regionserver-hadoop2.log,
> -Dhbase.home.dir=/home/ngc/hbase-0.94.2/bin/.., -Dhbase.id.str=ngc,
> -Dhbase.root.logger=INFO,DRFA,
> -Djava.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64,
> -Dhbase.security.logger=INFO,DRFAS]
> 2012-11-21 13:40:57,222 DEBUG
> org.apache.hadoop.hbase.regionserver.HRegionServer: Set serverside
> HConnection retries=100
> 2012-11-21 13:40:57,469 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-1
> [... the "Starting Thread-1" line repeats 10 times in total
> (13:40:57,469 through 13:40:57,488), snipped ...]
> 2012-11-21 13:40:57,500 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HRegionServer, port=60020
> 2012-11-21 13:40:57,654 INFO org.apache.hadoop.hbase.io.hfile.CacheConfig:
> Allocating LruBlockCache with maximum size 493.8m
> 2012-11-21 13:40:57,699 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown hook
> thread: Shutdownhook:regionserver60020
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop2.aj.c2fse.northgrum.com
> 2012-11-21 13:40:57,701 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-
0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.0.0-12-generic
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 13:40:57,702 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 13:40:57,703 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=regionserver:60020
> 2012-11-21 13:40:57,718 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.54:2181
> 2012-11-21 13:40:57,719 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 12835@hadoop2
> 2012-11-21 13:40:57,727 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:57,727 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:57,731 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:40:57,733 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:57,848 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:40:57,849 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 2000ms before retry #1...
> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.53:2181
> 2012-11-21 13:40:58,283 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:58,283 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:58,283 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:40:58,284 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /127.0.0.1:2181
> 2012-11-21 13:40:58,726 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:58,726 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:58,726 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:40:58,727 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:40:59,367 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.52:2181
> 2012-11-21 13:40:59,368 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:40:59,368 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:40:59,368 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:40:59,369 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:00,660 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:00,660 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:00,660 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:00,661 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:00,761 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:00,762 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 4000ms before retry #2...
> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:01,422 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:01,422 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:01,422 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:01,423 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:02,369 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:02,370 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:02,370 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:02,370 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:02,627 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:02,627 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:02,627 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:02,628 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:03,968 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:03,968 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:03,969 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:03,969 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:04,733 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:04,733 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:04,733 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:04,734 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:04,835 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:04,835 INFO org.apache.hadoop.hbase.util.RetryCounter:
> Sleeping 8000ms before retry #3...
> 2012-11-21 13:41:05,741 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:05,741 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:05,741 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:05,742 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:06,192 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:06,192 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:06,192 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:06,193 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:07,313 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:07,313 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:07,313 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:07,314 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:08,272 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:08,273 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:08,273 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:08,273 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:09,090 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:09,090 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:09,090 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:09,091 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:09,710 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:09,711 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:09,711 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:09,711 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:09,712 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:11,120 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:11,121 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:11,121 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:11,121 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:11,122 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:11,599 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:11,600 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:11,600 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:11,600 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,320 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server hadoop1/127.0.0.1:2181
> 2012-11-21 13:41:12,320 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:12,320 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1/127.0.0.1:2181, initiating session
> 2012-11-21 13:41:12,321 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,860 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181
> 2012-11-21 13:41:12,861 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:12,861 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:12,861 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop1.aj.c2fse.northgrum.com/10.64.155.52:2181,
> initiating session
> 2012-11-21 13:41:12,862 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:12,962 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper exception:
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2012-11-21 13:41:12,962 ERROR
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 3 retries
> 2012-11-21 13:41:12,963 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil:
> regionserver:60020 Unable to set watcher on znode /hbase/master
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,966 ERROR
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:60020
> Received unexpected KeeperException, re-throwing exception
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,966 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Unexpected exception
> during initialization, aborting
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/master
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>       at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>       at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
>       at
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
>       at
> org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
>       at
> org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:597)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:12,969 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> loaded coprocessors are: []
> 2012-11-21 13:41:12,969 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unexpected
> exception during initialization, aborting
> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181
> 2012-11-21 13:41:14,834 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:14,834 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:14,834 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 13:41:14,835 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server
> hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181
> 2012-11-21 13:41:15,335 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 13:41:15,335 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 13:41:15,335 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop2.aj.c2fse.northgrum.com/10.64.155.53:2181,
> initiating session
> 2012-11-21 13:41:15,336 INFO org.apache.zookeeper.ClientCnxn: Unable to
> read additional data from server sessionid 0x0, likely server has closed
> socket, closing socket connection and attempting reconnect
> 2012-11-21 13:41:15,975 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> server on 60020
> 2012-11-21 13:41:15,975 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
> hadoop2.aj.c2fse.northgrum.com,60020,1353523257570: Initialization of RS
> failed.  Hence aborting RS.
> java.io.IOException: Received the shutdown message while waiting.
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
>       at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
>       at java.lang.Thread.run(Thread.java:662)
> 2012-11-21 13:41:15,976 FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort:
> loaded coprocessors are: []
> 2012-11-21 13:41:15,976 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization
> of RS failed.  Hence aborting RS.
> 2012-11-21 13:41:15,978 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer
> MXBean
> 2012-11-21 13:41:15,980 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting;
> hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
> 2012-11-21 13:41:15,980 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
> 2012-11-21 13:41:15,981 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown
> hook thread.
> 2012-11-21 13:41:15,981 INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
>
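[Editorial aside, not part of the original report: the regionserver log above repeatedly shows the client connecting to "hadoop1/127.0.0.1:2181", i.e. the hostname hadoop1 is resolving to the loopback address, which matches the restored 127.0.0.1 line in /etc/hosts described earlier. A toy sketch of that resolution behavior follows; the hosts-file contents and the parser are illustrative assumptions, not HBase or glibc code.]

```python
def first_address(hosts_text, name):
    # Toy /etc/hosts lookup: return the first address whose alias list
    # contains *name*. Ignores comments; real resolvers are more involved,
    # but first-match-wins is the behavior that matters here.
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

# Hypothetical hosts files: with "hadoop1" aliased to the loopback line,
# the name resolves to 127.0.0.1 -- matching the "hadoop1/127.0.0.1:2181"
# entries in the regionserver log. Without that alias it resolves to the
# LAN address the other nodes can actually reach.
broken = "127.0.0.1  localhost hadoop1\n10.64.155.52  hadoop1.aj.c2fse.northgrum.com hadoop1\n"
fixed  = "127.0.0.1  localhost\n10.64.155.52  hadoop1.aj.c2fse.northgrum.com hadoop1\n"
print(first_address(broken, "hadoop1"))  # 127.0.0.1
print(first_address(fixed, "hadoop1"))   # 10.64.155.52
```

The practical upshot, under these assumptions: keeping a 127.0.0.1 line for `localhost` is fine, but the machine's own hostname should only appear on its routable-IP line, so that ZooKeeper peers and HBase clients on other hosts resolve it consistently.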
> Finally, the ZooKeeper log on hadoop1 shows:
> Wed Nov 21 13:40:19 EST 2012 Starting zookeeper on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 13:40:20,279 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority
> quorums
> 2012-11-21 13:40:20,334 DEBUG org.apache.hadoop.hbase.util.Bytes:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.util.Bytes
> 2012-11-21 13:40:20,335 DEBUG org.apache.hadoop.hbase.util.VersionInfo:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.util.VersionInfo
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.zookeeper.ZKConfig:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.zookeeper.ZKConfig
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase.HBaseConfiguration:
> preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase.HBaseConfiguration
> 2012-11-21 13:40:20,336 DEBUG org.apache.hadoop.hbase: preRegister called.
> Server=com.sun.jmx.mbeanserver.JmxMBeanServer@538f1d7e,
> name=log4j:logger=org.apache.hadoop.hbase
> 2012-11-21 13:40:20,336 INFO
> org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
> 2012-11-21 13:40:20,356 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: binding to port
> 0.0.0.0/0.0.0.0:2181
> 2012-11-21 13:40:20,378 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: tickTime set to 3000
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: minSessionTimeout set to -1
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: maxSessionTimeout set to
> 180000
> 2012-11-21 13:40:20,379 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: initLimit set to 10
> 2012-11-21 13:40:20,395 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: acceptedEpoch not found!
> Creating with a reasonable default of 0. This should only happen when you
> are upgrading your installation
> 2012-11-21 13:40:20,442 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: My election bind port:
> 0.0.0.0/0.0.0.0:3888
> 2012-11-21 13:40:20,456 INFO
> org.apache.zookeeper.server.quorum.QuorumPeer: LOOKING
> 2012-11-21 13:40:20,458 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: New election. My id
> =  0, proposed zxid=0x0
> 2012-11-21 13:40:20,460 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification: 0
> (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0
> (n.peerEPoch), LOOKING (my state)
> 2012-11-21 13:40:20,464 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:20,465 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:20,663 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 400
> 2012-11-21 13:40:21,064 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:21,065 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:21,065 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 800
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:21,866 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 1600
> 2012-11-21 13:40:22,113 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55216
> 2012-11-21 13:40:22,122 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,122 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /127.0.0.1:55216 (no session
> established for client)
> 2012-11-21 13:40:22,373 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.52:60339
> 2012-11-21 13:40:22,374 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,374 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.52:60339 (no session
> established for client)
> 2012-11-21 13:40:22,968 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.52:60342
> 2012-11-21 13:40:22,968 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:22,968 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.52:60342 (no session
> established for client)
> 2012-11-21 13:40:23,187 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55221
> 2012-11-21 13:40:23,188 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:23,188 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /127.0.0.1:55221 (no session
> established for client)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (2, 0)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.QuorumCnxManager: Have smaller server
> identifier, so dropping the connection: (1, 0)
> 2012-11-21 13:40:23,467 INFO
> org.apache.zookeeper.server.quorum.FastLeaderElection: Notification time
> out: 3200
> 2012-11-21 13:40:24,116 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /10.64.155.54:35599
> 2012-11-21 13:40:24,117 WARN org.apache.zookeeper.server.NIOServerCnxn:
> Exception causing close of session 0x0 due to java.io.IOException:
> ZooKeeperServer not running
> 2012-11-21 13:40:24,117 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.64.155.54:35599 (no session
> established for client)
> 2012-11-21 13:40:24,176 INFO
> org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
> connection from /127.0.0.1:55225
> ...
>
> Here are the logs when I manage ZK myself (showing the 127.0.0.1 problem
> in /etc/hosts):
> Wed Nov 21 14:46:21 EST 2012 Stopping hbase (via master)
> Wed Nov 21 14:46:35 EST 2012 Starting master on hadoop1
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 386178
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 386178
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> HBase 0.94.2
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Subversion https://svn.apache.org/repos/asf/hbase/branches/0.94 -r 1395367
> 2012-11-21 14:46:36,405 INFO org.apache.hadoop.hbase.util.VersionInfo:
> Compiled by jenkins on Sun Oct  7 19:11:01 UTC 2012
> 2012-11-21 14:46:36,555 DEBUG org.apache.hadoop.hbase.master.HMaster: Set
> serverside HConnection retries=100
> 2012-11-21 14:46:36,822 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,825 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,829 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,832 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,835 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,838 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,842 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,845 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,848 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,851 INFO org.apache.hadoop.ipc.HBaseServer: Starting
> Thread-2
> 2012-11-21 14:46:36,862 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HMaster, port=60000
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=hadoop1
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_25
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.vendor=Sun Microsystems Inc.
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/home/ngc/jdk1.6.0_25/jre
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/ngc/hbase-0.94.2/conf:/home/ngc/jdk1.6.0_25//lib/tools.jar:/home/ngc/hbase-0.94.2/bin/..:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2.jar:/home/ngc/hbase-0.94.2/bin/../hbase-0.94.2-tests.jar:/home/ngc/hbase-0.94.2/bin/../lib/activation-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/asm-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/avro-ipc-1.5.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-cli-1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-codec-1.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-collections-3.2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-configuration-1.6.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-digester-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-el-1.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-httpclient-3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-io-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-lang-2.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-logging-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-math-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/commons-net-1.4.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/core-3.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/guava-11.0.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/hadoop-core-1.0.4.jar:/home/ngc/hbase-0.94.2/bin/../lib/high-scale-lib-1.1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpclient-4.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/httpcore-4.1.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jackson-xc-1.8.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jamon-runtime-2.3.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-compiler-5.5.23.jar:/home/ngc/hbase-0.94.2/bin/../lib/jasper-runtime-5.5.23.jar:/home/ngc/hbase-
0.94.2/bin/../lib/jaxb-api-2.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-core-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-json-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jersey-server-1.8.jar:/home/ngc/hbase-0.94.2/bin/../lib/jettison-1.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jetty-util-6.1.26.jar:/home/ngc/hbase-0.94.2/bin/../lib/jruby-complete-1.6.5.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/jsr305-1.3.9.jar:/home/ngc/hbase-0.94.2/bin/../lib/junit-4.10-HBASE-1.jar:/home/ngc/hbase-0.94.2/bin/../lib/libthrift-0.8.0.jar:/home/ngc/hbase-0.94.2/bin/../lib/log4j-1.2.16.jar:/home/ngc/hbase-0.94.2/bin/../lib/metrics-core-2.1.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/netty-3.2.4.Final.jar:/home/ngc/hbase-0.94.2/bin/../lib/protobuf-java-2.4.0a.jar:/home/ngc/hbase-0.94.2/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-api-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hbase-0.94.2/bin/../lib/snappy-java-1.0.3.2.jar:/home/ngc/hbase-0.94.2/bin/../lib/stax-api-1.0.1.jar:/home/ngc/hbase-0.94.2/bin/../lib/velocity-1.7.jar:/home/ngc/hbase-0.94.2/bin/../lib/xmlenc-0.52.jar:/home/ngc/hbase-0.94.2/bin/../lib/zookeeper-3.4.3.jar:/home/zookeeper-3.4.4/conf:/home/zookeeper-3.4.4:/home/ngc/hadoop-1.0.4/libexec/../conf:/home/ngc/jdk1.6.0_25/lib/tools.jar:/home/ngc/hadoop-1.0.4/libexec/..:/home/ngc/hadoop-1.0.4/libexec/../hadoop-core-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/asm-3.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjrt-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/aspectjtools-1.6.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-1.7.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-cli-1.2.jar:/home/ngc/hadoop
-1.0.4/libexec/../lib/commons-codec-1.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-collections-3.2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-configuration-1.6.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-daemon-1.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-digester-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-el-1.0.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-httpclient-3.0.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-io-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-lang-2.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-1.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-logging-api-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-math-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/commons-net-1.4.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/core-3.1.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-capacity-scheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-fairscheduler-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hadoop-thriftfs-1.0.4.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/hsqldb-1.8.0.10.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-compiler-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jasper-runtime-5.5.12.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jdeb-0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-core-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-json-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jersey-server-1.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jets3t-0.6.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jetty-util-6.1.26.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsch-0.1.42.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/junit-4.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/kfs-0.2.2.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/log4j-1.2.15.jar:/home/ngc/hado
op-1.0.4/libexec/../lib/mockito-all-1.8.5.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/oro-2.0.8.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/servlet-api-2.5-20081211.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-api-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/xmlenc-0.52.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/ngc/hadoop-1.0.4/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/ngc/hadoop-1.0.4/libexec/../lib/native/Linux-amd64-64:/home/ngc/hbase-0.94.2/bin/../lib/native/Linux-amd64-64
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=3.2.0-24-generic
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.name=ngc
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/ngc
> 2012-11-21 14:46:37,071 INFO org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/ngc/hbase-0.94.2
> 2012-11-21 14:46:37,072 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection, connectString=hadoop2:2181,hadoop1:2181,hadoop3:2181
> sessionTimeout=180000 watcher=master:60000
> 2012-11-21 14:46:37,087 INFO org.apache.zookeeper.ClientCnxn: Opening
> socket connection to server /10.64.155.54:2181
> 2012-11-21 14:46:37,087 INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
> this process is 12692@hadoop1
> 2012-11-21 14:46:37,095 WARN
> org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException:
> java.lang.SecurityException: Unable to locate a login configuration
> occurred when trying to find JAAS configuration.
> 2012-11-21 14:46:37,095 INFO
> org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section 'Client'
> could not be found. If you are not using SASL, you may ignore this. On the
> other hand, if you expected SASL to work, please fix your JAAS
> configuration.
> 2012-11-21 14:46:37,098 INFO org.apache.zookeeper.ClientCnxn: Socket
> connection established to hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181,
> initiating session
> 2012-11-21 14:46:37,131 INFO org.apache.zookeeper.ClientCnxn: Session
> establishment complete on server
> hadoop3.aj.c2fse.northgrum.com/10.64.155.54:2181, sessionid =
> 0x33b247f4c380000, negotiated timeout = 40000
> 2012-11-21 14:46:37,224 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> Responder: starting
> 2012-11-21 14:46:37,225 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> listener on 60000: starting
> 2012-11-21 14:46:37,240 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 0 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 1 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 2 on 60000: starting
> 2012-11-21 14:46:37,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 3 on 60000: starting
> 2012-11-21 14:46:37,242 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 4 on 60000: starting
> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 5 on 60000: starting
> 2012-11-21 14:46:37,246 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 6 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 7 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 8 on 60000: starting
> 2012-11-21 14:46:37,247 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 9 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 0 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 1 on 60000: starting
> 2012-11-21 14:46:37,248 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC
> Server handler 2 on 60000: starting
> 2012-11-21 14:46:37,253 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=Master,
> sessionId=hadoop1,60000,1353527196915
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: revision
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsUser
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsDate
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsUrl
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: date
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsRevision
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: user
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: hdfsVersion
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: url
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics:
> MetricsString added: version
> 2012-11-21 14:46:37,270 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-11-21 14:46:37,272 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-11-21 14:46:37,272 INFO
> org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
> 2012-11-21 14:46:37,299 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for
> /hbase/backup-masters/hadoop1,60000,1353527196915 from backup master
> directory
> 2012-11-21 14:46:37,320 WARN
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
> /hbase/backup-masters/hadoop1,60000,1353527196915 already deleted, and this
> is not a retry
> 2012-11-21 14:46:37,321 INFO
> org.apache.hadoop.hbase.master.ActiveMasterManager:
> Master=hadoop1,60000,1353527196915
> 2012-11-21 14:46:38,475 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 0 time(s).
> 2012-11-21 14:46:39,476 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 1 time(s).
> 2012-11-21 14:46:40,477 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 2 time(s).
> 2012-11-21 14:46:41,477 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 3 time(s).
> 2012-11-21 14:46:42,478 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 4 time(s).
> 2012-11-21 14:46:43,478 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 5 time(s).
> 2012-11-21 14:46:44,479 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 6 time(s).
> 2012-11-21 14:46:45,479 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 7 time(s).
> 2012-11-21 14:46:46,480 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 8 time(s).
> 2012-11-21 14:46:47,480 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: hadoop1/127.0.0.1:9000. Already tried 9 time(s).
> 2012-11-21 14:46:47,483 FATAL org.apache.hadoop.hbase.master.HMaster:
> Unhandled exception. Starting shutdown.
> java.net.ConnectException: Call to hadoop1/127.0.0.1:9000 failed on
> connection exception: java.net.ConnectException: Connection refused
>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>       at $Proxy10.getProtocolVersion(Unknown Source)
>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>       at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>       at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>       at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>       at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:561)
>       at
> org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
>       at
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:482)
>     ...
>
> [Message clipped]
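
The final stack trace is the key symptom: the master resolves the HDFS URI host, hadoop1, to 127.0.0.1 (see "Call to hadoop1/127.0.0.1:9000") and therefore never reaches the NameNode. The usual way to keep local logins (VNC/NX) working while avoiding this is to leave the "127.0.0.1 localhost" line in place but map the cluster hostname only to its real NIC address, never to a loopback one. A sketch of such an /etc/hosts follows; the addresses shown for hadoop1 and hadoop2 are illustrative placeholders (only hadoop3's 10.64.155.54 actually appears in the logs above), so verify the real addresses with ifconfig or ip addr first:

```
# /etc/hosts -- sketch only; substitute each host's actual NIC address
127.0.0.1      localhost          # keep: VNC/NX and other local services need this
# 127.0.1.1    hadoop1            # remove/comment any loopback mapping of the hostname
10.64.155.51   hadoop1            # placeholder address
10.64.155.52   hadoop2            # placeholder address
10.64.155.54   hadoop3            # matches hadoop3.aj.c2fse.northgrum.com in the log above
```

After editing, `getent hosts hadoop1` (run on each node) should return the NIC address rather than a loopback one; then restart HDFS, ZooKeeper, and HBase so every daemon re-resolves the name and re-binds its listener.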