Posted to dev@hbase.apache.org by "fw0037 (JIRA)" <ji...@apache.org> on 2018/06/20 12:22:00 UTC

[jira] [Created] (HBASE-20758) HBase reports errors on startup

fw0037 created HBASE-20758:
------------------------------

             Summary: HBase reports errors on startup
                 Key: HBASE-20758
                 URL: https://issues.apache.org/jira/browse/HBASE-20758
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.0.0
         Environment: *hdfs-site.xml:*

<configuration>
<property>
 <name>dfs.datanode.data.dir</name>
 <value>/hadoop/data</value>
</property>
<property>
 <name>dfs.nameservices</name>
 <value>cengine</value>
</property>
<property>
 <name>dfs.ha.namenodes.cengine</name>
 <value>nn1,nn2</value>
</property>
<property>
 <name>dfs.namenode.rpc-address.cengine.nn1</name>
 <value>namenode:8020</value>
</property>
<property>
 <name>dfs.namenode.rpc-address.cengine.nn2</name>
 <value>secnamenode:8020</value>
</property>
<property>
 <name>dfs.namenode.http-address.cengine.nn1</name>
 <value>namenode:50070</value>
</property>
<property>
 <name>dfs.namenode.http-address.cengine.nn2</name>
 <value>secnamenode:50070</value>
</property>
<property>
 <name>dfs.namenode.shared.edits.dir</name>
 <value>qjournal://datanode1:8485;datanode2:8485;datanode3:8485/cengine</value>
</property>
<property>
 <name>dfs.client.failover.proxy.provider.cengine</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
 <name>dfs.ha.fencing.methods</name>
 <value>sshfence</value>
</property>
<property>
 <name>dfs.ha.fencing.ssh.private-key-files</name>
 <value>/root/.ssh/id_rsa</value>
</property>
<property>
 <name>dfs.journalnode.edits.dir</name>
 <value>/path/to/journal/node/local/data</value>
</property>
<property>
 <name>dfs.ha.automatic-failover.enabled</name>
 <value>true</value>
 </property>
</configuration>

 

*hbase-site.xml:*

<configuration>
 <property>
 <name>hbase.rootdir</name>
 <value>hdfs://cengine/hbase</value>
 </property>
 <property>
 <name>hbase.cluster.distributed</name>
 <value>true</value>
 </property>
 <property>
 <name>hbase.zookeeper.quorum</name>
 <value>zknode:2181,zknode:2182,zknode:2183</value>
 </property>
 <property> 
 <name>hbase.master.info.port</name> 
 <value>60010</value> 
 </property>
</configuration>

*regionservers:*

datanode1
datanode2
datanode3

*/etc/hosts:*
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 datanode-0001
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 datanode-0001

192.168.0.54 zknode
192.168.0.117 namenode
192.168.0.86 secnamenode
192.168.0.142 datanode1
192.168.0.140 datanode2
192.168.0.113 datanode3
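
(Editor's note: since the log below fails with UnknownHostException for "namenode" and "secnamenode", a quick sanity check is to confirm every hostname referenced in hdfs-site.xml and regionservers actually appears as an alias in the hosts file on the node running the master. This is only an illustrative sketch; the host list and addresses are copied from this report, and the temp-file path is arbitrary.)

```shell
# Check that each expected hostname is listed as an alias in a hosts file.
# The file contents below mirror the /etc/hosts shown in this report.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.0.54 zknode
192.168.0.117 namenode
192.168.0.86 secnamenode
192.168.0.142 datanode1
192.168.0.140 datanode2
192.168.0.113 datanode3
EOF

for h in zknode namenode secnamenode datanode1 datanode2 datanode3; do
  # awk scans columns 2..NF (the hostname aliases) of each line for an exact match
  if awk -v h="$h" '{for (i = 2; i <= NF; i++) if ($i == h) found = 1} END {exit !found}' "$hosts_file"; then
    echo "$h: OK"
  else
    echo "$h: MISSING"
  fi
done
rm -f "$hosts_file"
```

If any host prints MISSING on the node where the HMaster runs, that alone explains the UnknownHostException in the log.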
            Reporter: fw0037


*The local HDFS version is Hadoop 2.7.6 and the HBase version is HBase 2.0.0.*

*HDFS runs and works normally, but HBase reports errors after startup.*

 

*Before running start-hbase.sh, I executed hadoop dfsadmin -safemode leave, but the startup log hbase-root-master-datanode1.log still contains the following errors:*

Tue Jun 19 20:36:18 CST 2018 Stopping hbase (via master)
2018-06-19 20:39:44,461 INFO [main] master.HMaster: STARTING service HMaster
2018-06-19 20:39:44,463 INFO [main] util.VersionInfo: HBase 2.0.0
2018-06-19 20:39:44,463 INFO [main] util.VersionInfo: Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
2018-06-19 20:39:44,463 INFO [main] util.VersionInfo: Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
2018-06-19 20:39:44,463 INFO [main] util.VersionInfo: From source with checksum a59e806496ef216732e730c746bbe5ac
2018-06-19 20:39:44,965 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=clean znode for master connecting to ZooKeeper ensemble=zknode:2181,zknode:2182,zknode:2183
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=datanode1
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_131
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_131/jre
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: 9.13.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.6-tests.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.6.jar:/home/centos/hadoop-2.7.6/contrib/capacity-scheduler/*.jar:/home/centos/hadoop-2.7.6/etc/hadoop
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/centos/hadoop-2.7.6/lib/native
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-123.el7.x86_64
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2018-06-19 20:39:44,992 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/centos/hbase-2.0.0/conf
2018-06-19 20:39:44,993 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=zknode:2181,zknode:2182,zknode:2183 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@65fb9ffc
2018-06-19 20:39:45,015 INFO [main-SendThread(zknode:2182)] zookeeper.ClientCnxn: Opening socket connection to server zknode/192.168.0.54:2182. Will not attempt to authenticate using SASL (unknown error)
2018-06-19 20:39:45,023 INFO [main-SendThread(zknode:2182)] zookeeper.ClientCnxn: Socket connection established to zknode/192.168.0.54:2182, initiating session
2018-06-19 20:39:45,040 INFO [main-SendThread(zknode:2182)] zookeeper.ClientCnxn: Session establishment complete on server zknode/192.168.0.54:2182, sessionid = 0x26417e2deac0004, negotiated timeout = 40000
2018-06-19 20:39:45,144 INFO [main] zookeeper.ZooKeeper: Session: 0x26417e2deac0004 closed
2018-06-19 20:39:45,146 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x26417e2deac0004
Tue Jun 19 20:40:54 CST 2018 Starting master on datanode1
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30203
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 30203
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2018-06-19 20:40:55,690 INFO [main] master.HMaster: STARTING service HMaster
2018-06-19 20:40:55,692 INFO [main] util.VersionInfo: HBase 2.0.0
2018-06-19 20:40:55,692 INFO [main] util.VersionInfo: Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
2018-06-19 20:40:55,692 INFO [main] util.VersionInfo: Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
2018-06-19 20:40:55,692 INFO [main] util.VersionInfo: From source with checksum a59e806496ef216732e730c746bbe5ac
2018-06-19 20:40:56,149 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-root
2018-06-19 20:40:56,150 INFO [main] util.ServerCommandLine: hbase.rootdir: hdfs://cengine/hbase
2018-06-19 20:40:56,150 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2018-06-19 20:40:56,150 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: zknode:2181,zknode:2182,zknode:2183
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/java/jdk1.8.0_131/bin:/home/centos/hadoop-2.7.6/bin:/home/centos/hadoop-2.7.6/sbin:/home/centos/hbase-2.0.0/bin
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_PID_DIR=/hadoop/pid
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:QT_GRAPHICSSYSTEM=native
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/root
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:LD_LIBRARY_PATH=:/home/centos/hadoop-2.7.6/lib/native
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:LOGNAME=root
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:PWD=/home/centos/hbase-2.0.0/conf
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:QT_GRAPHICSSYSTEM_CHECKED=1
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:SELINUX_USE_CURRENT_RANGE=
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HADOOP_HOME=/home/centos/hadoop-2.7.6
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/home/centos/hbase-2.0.0/logs -Dhbase.log.file=hbase-root-master-datanode1.log -Dhbase.home.dir=/home/centos/hbase-2.0.0 -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Djava.library.path=/home/centos/hadoop-2.7.6/lib/native -Dhbase.security.logger=INFO,RFAS
2018-06-19 20:40:56,151 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: 1:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:SHLVL=4
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:QT_PLUGIN_PATH=/usr/lib64/kde4/plugins:/usr/lib/kde4/plugins
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-root-master-datanode1.log
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HISTSIZE=1000
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/usr/java/jdk1.8.0_131
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:TERM=xterm
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=834
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:SELINUX_LEVEL_REQUESTED=
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:DISPLAY=localhost:11.0
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:SELINUX_ROLE_REQUESTED=
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/home/centos/hadoop-2.7.6/etc/hadoop
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=root
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/hadoop/pid/hbase-root-master.znode
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:SSH_TTY=/dev/pts/2
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:SSH_CLIENT=10.61.45.103 63244 22
2018-06-19 20:40:56,152 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-root-master-datanode1
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/home/centos/hbase-2.0.0/logs
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:USER=root
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: rotobuf-java-2.5.0.jar:/home/centos/hbase-2.0.0/lib/slf4j-api-1.7.25.jar:/home/centos/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar:/home/centos/hbase-2.0.0/lib/snappy-java-1.0.5.jar:/home/centos/hbase-2.0.0/lib/spymemcached-2.12.2.jar:/home/centos/hbase-2.0.0/lib/validation-api-1.1.0.Final.jar:/home/centos/hbase-2.0.0/lib/xmlenc-0.52.jar:/home/centos/hbase-2.0.0/lib/xz-1.0.jar:/home/centos/hbase-2.0.0/lib/zookeeper-3.4.10.jar:/home/centos/hadoop-2.7.6/etc/hadoop:/home/centos/hadoop-2.7.6/share/hadoop/common/lib/*:/home/centos/hadoop-2.7.6/share/hadoop/common/*:/home/centos/hadoop-2.7.6/share/hadoop/hdfs:/home/centos/hadoop-2.7.6/share/hadoop/hdfs/lib/*:/home/centos/hadoop-2.7.6/share/hadoop/hdfs/*:/home/centos/hadoop-2.7.6/share/hadoop/yarn/lib/*:/home/centos/hadoop-2.7.6/share/hadoop/yarn/*:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/lib/*:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/*:/home/centos/hadoop-2.7.6/contrib/capacity-scheduler/*.jar:/home/centos/hadoop-2.7.6/etc/hadoop
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=10.61.45.103 63244 192.168.0.142 22
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/hadoop/pid/hbase-root-master.autostart
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:HOSTNAME=datanode1
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:KDEDIRS=/usr
2018-06-19 20:40:56,153 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/0
2018-06-19 20:40:56,154 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2018-06-19 20:40:56,154 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/home/centos/hbase-2.0.0
2018-06-19 20:40:56,154 INFO [main] util.ServerCommandLine: env:HOME=/root
2018-06-19 20:40:56,154 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2018-06-19 20:40:56,155 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.131-b11
2018-06-19 20:40:56,156 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/home/centos/hbase-2.0.0/logs, -Dhbase.log.file=hbase-root-master-datanode1.log, -Dhbase.home.dir=/home/centos/hbase-2.0.0, -Dhbase.id.str=root, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/centos/hadoop-2.7.6/lib/native, -Dhbase.security.logger=INFO,RFAS]
2018-06-19 20:40:56,746 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-06-19 20:40:57,300 INFO [main] regionserver.RSRpcServices: master/datanode1:16000 server-side Connection retries=45
2018-06-19 20:40:57,336 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2018-06-19 20:40:57,338 INFO [main] ipc.RpcExecutor: Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2018-06-19 20:40:57,338 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2018-06-19 20:40:57,567 INFO [main] ipc.RpcServerFactory: Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService
2018-06-19 20:40:57,952 INFO [main] ipc.NettyRpcServer: Bind to /192.168.0.142:16000
2018-06-19 20:40:58,066 INFO [main] hfile.CacheConfig: Allocating onheap LruBlockCache size=366.60 MB, blockSize=64 KB
2018-06-19 20:40:58,080 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=275.94 KB, freeSize=366.33 MB, maxSize=366.60 MB, heapSize=275.94 KB, minSize=348.27 MB, minFactor=0.95, multiSize=174.13 MB, multiFactor=0.5, singleSize=87.07 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-06-19 20:40:58,081 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=275.94 KB, freeSize=366.33 MB, maxSize=366.60 MB, heapSize=275.94 KB, minSize=348.27 MB, minFactor=0.95, multiSize=174.13 MB, multiFactor=0.5, singleSize=87.07 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-06-19 20:40:58,747 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:58,980 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,379 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,379 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,396 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-06-19 20:40:59,400 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,400 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,403 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,403 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,404 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-06-19 20:40:59,406 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,406 WARN [main] hdfs.DFSUtil: Namenode for cengine remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2018-06-19 20:40:59,500 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=master:16000 connecting to ZooKeeper ensemble=zknode:2181,zknode:2182,zknode:2183
2018-06-19 20:40:59,514 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=datanode1
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_131
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_131/jre
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: 9.13.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.6-tests.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.6.jar:/home/centos/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.6.jar:/home/centos/hadoop-2.7.6/contrib/capacity-scheduler/*.jar:/home/centos/hadoop-2.7.6/etc/hadoop
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/centos/hadoop-2.7.6/lib/native
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-123.el7.x86_64
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2018-06-19 20:40:59,515 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/centos/hbase-2.0.0/conf
2018-06-19 20:40:59,516 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=zknode:2181,zknode:2182,zknode:2183 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@2a066689
2018-06-19 20:40:59,607 INFO [main-SendThread(zknode:2183)] zookeeper.ClientCnxn: Opening socket connection to server zknode/192.168.0.54:2183. Will not attempt to authenticate using SASL (unknown error)
2018-06-19 20:40:59,647 INFO [main-SendThread(zknode:2183)] zookeeper.ClientCnxn: Socket connection established to zknode/192.168.0.54:2183, initiating session
2018-06-19 20:40:59,667 INFO [main-SendThread(zknode:2183)] zookeeper.ClientCnxn: Session establishment complete on server zknode/192.168.0.54:2183, sessionid = 0x36417e2dea40005, negotiated timeout = 40000
2018-06-19 20:40:59,947 INFO [main] util.log: Logging initialized @4817ms
2018-06-19 20:41:00,072 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2018-06-19 20:41:00,094 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2018-06-19 20:41:00,094 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2018-06-19 20:41:00,096 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2018-06-19 20:41:00,097 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-06-19 20:41:00,097 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-06-19 20:41:00,132 INFO [main] http.HttpServer: Jetty bound to port 60010
2018-06-19 20:41:00,133 INFO [main] server.Server: jetty-9.3.19.v20170502
2018-06-19 20:41:00,214 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@711d1a52{/logs,file:///home/centos/hbase-2.0.0/logs/,AVAILABLE}
2018-06-19 20:41:00,215 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@302edb74{/static,file:///home/centos/hbase-2.0.0/hbase-webapps/static/,AVAILABLE}
2018-06-19 20:41:00,445 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@53cf9c99{/,file:///home/centos/hbase-2.0.0/hbase-webapps/master/,AVAILABLE}{file:/home/centos/hbase-2.0.0/hbase-webapps/master}
2018-06-19 20:41:00,450 INFO [main] server.AbstractConnector: Started ServerConnector@74ea46e2{HTTP/1.1,[http/1.1]}{0.0.0.0:60010}
2018-06-19 20:41:00,451 INFO [main] server.Server: Started @5321ms
2018-06-19 20:41:00,454 INFO [main] master.HMaster: hbase.rootdir=hdfs://cengine/hbase, hbase.cluster.distributed=true
2018-06-19 20:41:00,509 INFO [master/datanode1:16000] master.HMaster: Adding backup master ZNode /hbase/backup-masters/datanode1,16000,1529412056193
2018-06-19 20:41:00,608 INFO [master/datanode1:16000] master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/datanode1,16000,1529412056193 from backup master directory
2018-06-19 20:41:00,623 INFO [master/datanode1:16000] master.ActiveMasterManager: Registered as active master=datanode1,16000,1529412056193
2018-06-19 20:41:00,629 INFO [master/datanode1:16000] regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk size 2 MB, max count 164, initial count 0
2018-06-19 20:41:00,632 INFO [master/datanode1:16000] regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 183, initial count 0
2018-06-19 20:41:00,728 INFO [master/datanode1:16000] retry.RetryInvocationHandler: Exception while invoking setSafeMode of class ClientNamenodeProtocolTranslatorPB over secnamenode:8020 after 1 fail over attempts. Trying to fail over after sleeping for 699ms.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "secnamenode":8020; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
 at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744)
 at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:410)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
 at org.apache.hadoop.ipc.Client.call(Client.java:1452)
 at org.apache.hadoop.ipc.Client.call(Client.java:1413)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:671)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
 at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2610)
 at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:285)
 at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:697)
 at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:239)
 at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
 at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
 at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:795)
 at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2019)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:553)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException
 at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:411)
 ... 34 more
2018-06-19 20:41:01,430 INFO [master/datanode1:16000] retry.RetryInvocationHandler: Exception while invoking setSafeMode of class ClientNamenodeProtocolTranslatorPB over namenode:8020 after 2 fail over attempts. Trying to fail over after sleeping for 2117ms.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "namenode":8020; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
 at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:744)
 at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:410)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
 at org.apache.hadoop.ipc.Client.call(Client.java:1452)
 at org.apache.hadoop.ipc.Client.call(Client.java:1413)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
 at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:671)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
 at com.sun.proxy.$Proxy20.setSafeMode(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2610)
 at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1223)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:285)
 at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:697)
 at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:239)
 at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
 at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
 at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:795)
 at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2019)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:553)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException
 at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:411)
 ... 34 more
2018-06-19 20:41:03,549 INFO [master/datanode1:16000] retry.RetryInvocationHandler: Exception while invoking setSafeMode of class ClientNamenodeProtocolTranslatorPB over secnamenode:8020 after 3 fail over attempts. Trying to fail over after sleeping for 2088ms.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "secnamenode":8020; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
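
(Editor's note: the repeated UnknownHostException means the Java resolver on datanode1 cannot resolve "namenode"/"secnamenode", even though /etc/hosts above appears correct. A hedged diagnostic, run on the node hosting the master, is to query the system resolver directly with getent, which uses the same NSS path the JVM ultimately depends on; ping succeeding is not sufficient evidence.)

```shell
# Ask the NSS resolver (the same lookup path Java uses) for each failing host.
for h in namenode secnamenode; do
  if getent hosts "$h" > /dev/null; then
    echo "$h resolves"
  else
    echo "$h: NOT resolvable -- check /etc/hosts and /etc/nsswitch.conf on this node"
  fi
done
```

If getent fails here, the problem is host resolution on this node (e.g. the /etc/hosts shown above was edited on a different machine), not HBase or HDFS configuration.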

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)