Posted to user@hbase.apache.org by jcl <51...@163.com> on 2018/08/14 12:31:34 UTC

HBase 2.1.0 - RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure

After a power-off and restart of Hadoop and HBase, the Master is stuck at "Master is initializing", and ServerManager repeatedly logs "but crash processing already in progress".

Running jps shows that both HMaster and HRegionServer are live.


WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534248994825 but crash processing already in progress
WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248590107 but crash processing already in progress
WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534249077856 but crash processing already in progress
WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248994045 but crash processing already in progress
WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534248708149 but crash processing already in progress
WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248707381 but crash processing already in progress

 

LOG:

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 64091
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 64091
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2018-08-14 17:25:00,173 INFO [main] master.HMaster: STARTING service HMaster
2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: HBase 2.1.0
2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Source code repository revision=4531d1c947a25b28a9a994b60c791a112c12a2b4
2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Compiled by hbase on Wed Aug 1 11:25:59 2018
2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: From source with checksum fc32566f7e030ff71458fbf6dc77bce9
2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-root
2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.rootdir: hdfs://192.168.101.114:9000/hbase
2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: 192.168.101.114:2181
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:PATH=/opt/apache-phoenix-5.0.0-HBase-2.0-bin/bin:/opt/hbase-2.1.0/bin:/opt/hadoop-2.8.4/bin:/opt/jdk1.8.0_172/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:HADOOP_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:JAVA_LIBRARY_PATH=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:HBASE_REGIONSERVER_OPTS= -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:HBASE_CONF_DIR=/opt/hbase-2.1.0/conf
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:HDFS_DATANODE_SECURE_USER=root
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/root
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:PHOENIX_HOME=/opt/apache-phoenix-5.0.0-HBase-2.0-bin
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:LD_LIBRARY_PATH=:/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine: env:LOGNAME=root
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:PWD=/opt/hbase-2.1.0/bin
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HADOOP_PREFIX=/opt/hadoop-2.8.4
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HADOOP_INSTALL=/opt/hadoop-2.8.4
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:SELINUX_USE_CURRENT_RANGE=
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:YARN_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HADOOP_YARN_HOME=/opt/hadoop-2.8.4
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_MASTER_OPTS= -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HADOOP_HOME=/opt/hadoop-2.8.4
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070 -Dhbase.log.dir=/opt/hbase-2.1.0/logs -Dhbase.log.file=hbase-root-master-hbase-114.log -Dhbase.home.dir=/opt/hbase-2.1.0 -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native: -Dhbase.security.logger=INFO,RFAS
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HDFS_DATANODE_USER=root
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: 1:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=01;36:.au=01;36:.flac=01;36:.mid=01;36:.midi=01;36:.mka=01;36:.mp3=01;36:.mpc=01;36:.ogg=01;36:.ra=01;36:.wav=01;36:.axa=01;36:.oga=01;36:.spx=01;36:*.xspf=01;36:
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:SHLVL=4
2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-root-master-hbase-114.log
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HISTSIZE=1000
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/opt/jdk1.8.0_172
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:TERM=xterm
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:LANG=zh_CN.UTF-8
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=1
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:SELINUX_LEVEL_REQUESTED=
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HADOOP_LIBEXEC_DIR=/opt/hadoop-2.8.4/libexec
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:SELINUX_ROLE_REQUESTED=
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HADOOP_HDFS_HOME=/opt/hadoop-2.8.4
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HADOOP_MAPRED_HOME=/opt/hadoop-2.8.4
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HADOOP_COMMON_HOME=/opt/hadoop-2.8.4
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HADOOP_OPTS=-Djava.library.path=/opt/hadoop-2.8.4/lib:/opt/hadoop-2.8.4/lib/native
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=root
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/tmp/hbase-root-master.znode
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:SSH_TTY=/dev/pts/0
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:SSH_CLIENT=192.168.98.129 35604 22
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-root-master-hbase-114
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/opt/hbase-2.1.0/logs
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:USER=root
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: dparty/commons-logging-1.2.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/log4j-1.2.17.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-5.0.0-HBase-2.0-server.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-core-5.0.0-HBase-2.0.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/opt/hadoop-2.8.4/etc/hadoop:/opt/hadoop-2.8.4/share/hadoop/common/lib/:/opt/hadoop-2.8.4/share/hadoop/common/:/opt/hadoop-2.8.4/share/hadoop/hdfs:/opt/hadoop-2.8.4/share/hadoop/hdfs/lib/:/opt/hadoop-2.8.4/share/hadoop/hdfs/:/opt/hadoop-2.8.4/share/hadoop/yarn/lib/:/opt/hadoop-2.8.4/share/hadoop/yarn/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HDFS_NAMENODE_USER=root
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=192.168.98.129 35604 192.168.101.114 22
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/tmp/hbase-root-master.autostart
2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:HOSTNAME=hbase-114
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HADOOP_COMMON_LIB_NATIVE_DIR=/opt/hadoop-2.8.4/lib/native
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/0
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HDFS_SECONDARYNAMENODE_USER=root
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/opt/hbase-2.1.0
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HOME=/root
2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.172-b11
2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Xdebug, -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070, -Dhbase.log.dir=/opt/hbase-2.1.0/logs, -Dhbase.log.file=hbase-root-master-hbase-114.log, -Dhbase.home.dir=/opt/hbase-2.1.0, -Dhbase.id.str=root, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:, -Dhbase.security.logger=INFO,RFAS]
2018-08-14 17:25:00,886 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2018-08-14 17:25:01,258 INFO [main] regionserver.RSRpcServices: master/hbase-114:16000 server-side Connection retries=3
2018-08-14 17:25:01,278 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2018-08-14 17:25:01,418 INFO [main] ipc.RpcServerFactory: Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.ClientService, hbase.pb.AdminService
2018-08-14 17:25:01,632 INFO [main] ipc.NettyRpcServer: Bind to /192.168.101.114:16000
2018-08-14 17:25:01,688 INFO [main] hfile.CacheConfig: Allocating onheap LruBlockCache size=1.55 GB, blockSize=64 KB
2018-08-14 17:25:01,694 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB, freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB, minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-08-14 17:25:01,695 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB, freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB, minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2018-08-14 17:25:02,160 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-08-14 17:25:02,163 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-08-14 17:25:02,233 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=master:16000 connecting to ZooKeeper ensemble=192.168.101.114:2181
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=hbase-114
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_172
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.8.0_172/jre
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: o-2.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-862.el7.x86_64
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/opt/hbase-2.1.0/bin
2018-08-14 17:25:02,240 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.101.114:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@4ae2e781
2018-08-14 17:25:02,256 INFO [main-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate using SASL (unknown error)
2018-08-14 17:25:02,264 INFO [main-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.101.114/192.168.101.114:2181, initiating session
2018-08-14 17:25:02,282 INFO [main-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970000, negotiated timeout = 40000
2018-08-14 17:25:02,352 INFO [main] util.log: Logging initialized @2552ms
2018-08-14 17:25:02,413 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-08-14 17:25:02,455 INFO [main] http.HttpServer: Jetty bound to port 16010
2018-08-14 17:25:02,456 INFO [main] server.Server: jetty-9.3.19.v20170502
2018-08-14 17:25:02,489 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@49232c6f{/logs,file:///opt/hbase-2.1.0/logs/,AVAILABLE}
2018-08-14 17:25:02,490 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@279126f5{/static,file:///opt/hbase-2.1.0/hbase-webapps/static/,AVAILABLE}
2018-08-14 17:25:02,582 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@537b3b2e{/,file:///opt/hbase-2.1.0/hbase-webapps/master/,AVAILABLE}{file:/opt/hbase-2.1.0/hbase-webapps/master}
2018-08-14 17:25:02,587 INFO [main] server.AbstractConnector: Started ServerConnector@550c973e{HTTP/1.1,[http/1.1]}{0.0.0.0:16010}

2018-08-14 17:25:02,587 INFO [main] server.Server: Started @2787ms
2018-08-14 17:25:02,590 INFO [main] master.HMaster: hbase.rootdir=hdfs://192.168.101.114:9000/hbase, hbase.cluster.distributed=true
2018-08-14 17:25:02,606 INFO [Thread-14] master.HMaster: Adding backup master ZNode /hbase/backup-masters/hbase-114,16000,1534238700547
2018-08-14 17:25:02,685 INFO [Thread-14] master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/hbase-114,16000,1534238700547 from backup master directory
2018-08-14 17:25:02,691 INFO [Thread-14] master.ActiveMasterManager: Registered as active master=hbase-114,16000,1534238700547
2018-08-14 17:25:02,697 INFO [Thread-14] regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk size 2 MB, max count 713, initial count 0
2018-08-14 17:25:02,698 INFO [Thread-14] regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 792, initial count 0
2018-08-14 17:25:02,992 INFO [Thread-14] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2018-08-14 17:25:03,001 INFO [Thread-14] coordination.SplitLogManagerCoordination: Found 0 orphan tasks and 0 rescan nodes
2018-08-14 17:25:03,094 INFO [Thread-14] zookeeper.ReadOnlyZKClient: Connect 0x66461af1 to 192.168.101.114:2181 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2018-08-14 17:25:03,100 INFO [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.101.114:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$52/1619197561@6e451c19
2018-08-14 17:25:03,101 INFO [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate using SASL (unknown error)
2018-08-14 17:25:03,101 INFO [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.101.114/192.168.101.114:2181, initiating session
2018-08-14 17:25:03,104 INFO [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970001, negotiated timeout = 40000
2018-08-14 17:25:03,145 INFO [Thread-14] procedure2.ProcedureExecutor: Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker count=160
2018-08-14 17:25:03,149 INFO [Thread-14] util.FSHDFSUtils: Recover lease on dfs file hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log
2018-08-14 17:25:03,153 INFO [Thread-14] util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log after 4ms
2018-08-14 17:25:03,188 WARN [Thread-14] util.CommonFSUtils: Your Hadoop installation does not include the StreamCapabilities class from HDFS-11644, so we will skip checking if any FSDataOutputStreams actually support hflush/hsync. If you are running on top of HDFS this probably just means you have an older version and this can be ignored. If you are running on top of an alternate FileSystem implementation you should manually verify that hflush and hsync are implemented; otherwise you risk data loss and hard to diagnose errors when our assumptions are violated.
2018-08-14 17:25:03,189 INFO [Thread-14] wal.WALProcedureStore: Rolled new Procedure Store WAL, id=5
2018-08-14 17:25:03,190 INFO [Thread-14] procedure2.ProcedureExecutor: Recovered WALProcedureStore lease in 42msec
2018-08-14 17:25:03,224 INFO [Thread-14] procedure2.ProcedureExecutor: Loaded WALProcedureStore in 33msec
2018-08-14 17:25:03,224 INFO [Thread-14] procedure2.RemoteProcedureDispatcher: Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2018-08-14 17:25:03,261 WARN [Thread-14] master.ServerManager: Expiration of hbase-116,16020,1534237430655 but server not online
2018-08-14 17:25:03,261 INFO [Thread-14] master.ServerManager: Processing expiration of hbase-116,16020,1534237430655 on hbase-114,16000,1534238700547
2018-08-14 17:25:03,481 WARN [Thread-14] master.ServerManager: Expiration of hbase-115,16020,1534237425729 but server not online
2018-08-14 17:25:03,481 INFO [Thread-14] master.ServerManager: Processing expiration of hbase-115,16020,1534237425729 on hbase-114,16000,1534238700547
2018-08-14 17:25:03,622 INFO [Thread-14] balancer.BaseLoadBalancer: slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false
2018-08-14 17:25:03,629 INFO [Thread-14] balancer.StochasticLoadBalancer: Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, etc.
2018-08-14 17:25:03,669 INFO [Thread-14] master.HMaster: Active/primary master=hbase-114,16000,1534238700547, sessionid=0x10004dac1970000, setting cluster-up flag (Was=false)
2018-08-14 17:25:03,771 INFO [PEWorker-4] procedure.ServerCrashProcedure: Start pid=12, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=hbase-115,16020,1534237425729, splitWal=true, meta=false
2018-08-14 17:25:03,772 INFO [Thread-14] procedure2.TimeoutExecutorThread: ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner; timeout=30000, timestamp=1534238733772
2018-08-14 17:25:03,774 INFO [PEWorker-3] procedure.ServerCrashProcedure: Start pid=11, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=hbase-116,16020,1534237430655, splitWal=true, meta=false
2018-08-14 17:25:03,775 INFO [Thread-14] cleaner.CleanerChore: Cleaner pool size is 1
2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.RecoverableZooKeeper: Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=192.168.101.114:2181
2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.101.114:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@46bb7405
2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate using SASL (unknown error)
2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.101.114/192.168.101.114:2181, initiating session
2018-08-14 17:25:03,777 INFO [Thread-14] cleaner.LogCleaner: Creating OldWALs cleaners with size=2
2018-08-14 17:25:03,780 INFO [Thread-14-SendThread(192.168.101.114:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970006, negotiated timeout = 40000
2018-08-14 17:25:03,967 INFO [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] master.ServerManager: Registering regionserver=hbase-116,16020,1534238701517
2018-08-14 17:25:03,967 INFO [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000] master.ServerManager: Registering regionserver=hbase-115,16020,1534238702258
2018-08-14 17:25:04,022 INFO [RegionServerTracker-0] master.RegionServerTracker: RegionServer ephemeral node created, adding [hbase-116,16020,1534238701517]
2018-08-14 17:25:04,023 INFO [RegionServerTracker-0] master.RegionServerTracker: RegionServer ephemeral node created, adding [hbase-115,16020,1534238702258]
2018-08-14 17:25:33,877 INFO [WALProcedureStoreSyncThread] wal.ProcedureWALFile: Archiving hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log to hdfs://192.168.101.114:9000/hbase/oldWALs/pv2-00000000000000000004.log
2018-08-14 17:26:59,875 WARN [qtp1304765785-87] servlet.ServletHandler: /master-status
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.isInMaintenanceMode(HMaster.java:2890)
at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:277)
at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:395)
at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:386)
at org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:81)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)

Re: HBase 2.1.0 - RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure

Posted by Ted Yu <yu...@gmail.com>.
jcl:
I can see that DEBUG logging wasn't turned on.

Can you set log4j to DEBUG level and see if there is more information?

Cheers
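Ted's suggestion above amounts to a one-line log4j change. A minimal sketch (the property name assumes the stock conf/log4j.properties shipped with HBase 2.1; verify against your local file before relying on it):

```properties
# conf/log4j.properties: raise HBase's own classes to DEBUG
# while leaving Hadoop and ZooKeeper at their existing levels
log4j.logger.org.apache.hadoop.hbase=DEBUG

# Alternatively, since the startup log above shows HBASE_ROOT_LOGGER=INFO,RFA,
# the root level can be raised for everything via the environment instead:
#   export HBASE_ROOT_LOGGER=DEBUG,RFA
```

Either way, restart the HMaster so the new level takes effect, then re-check the master log for more detail around ServerCrashProcedure and ServerManager.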

On Tue, Aug 14, 2018 at 6:56 AM Allan Yang <al...@apache.org> wrote:

> Those logs are not enough to locate the problem.
> Best Regards
> Allan Yang
>
>
> jcl <51...@163.com> wrote on Tue, Aug 14, 2018 at 9:18 PM:
>
> > After a power-off and restart of Hadoop and HBase, the Master is stuck at
> > "Master is initializing", and ServerManager logs "crash processing already
> > in progress".
> >
> > Running jps shows that HMaster and HRegionServer are both alive.
> >
> >
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534248994825 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248590107 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534249077856 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248994045 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-115,16020,1534248708149 but crash processing already in progress
> > WARN  [Thread-14] master.ServerManager: Expiration called on
> > hbase-116,16020,1534248707381 but crash processing already in progress
> >
> >
> >
> > LOG:
> >
> > core file size (blocks, -c) 0
> > data seg size (kbytes, -d) unlimited
> > scheduling priority (-e) 0
> > file size (blocks, -f) unlimited
> > pending signals (-i) 64091
> > max locked memory (kbytes, -l) 64
> > max memory size (kbytes, -m) unlimited
> > open files (-n) 1024
> > pipe size (512 bytes, -p) 8
> > POSIX message queues (bytes, -q) 819200
> > real-time priority (-r) 0
> > stack size (kbytes, -s) 8192
> > cpu time (seconds, -t) unlimited
> > max user processes (-u) 64091
> > virtual memory (kbytes, -v) unlimited
> > file locks (-x) unlimited
> > 2018-08-14 17:25:00,173 INFO [main] master.HMaster: STARTING service
> > HMaster
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: HBase 2.1.0
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Source code
> > repository revision=4531d1c947a25b28a9a994b60c791a112c12a2b4
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Compiled by hbase
> on
> > Wed Aug 1 11:25:59 2018
> > 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: From source with
> > checksum fc32566f7e030ff71458fbf6dc77bce9
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> hbase.tmp.dir:
> > /tmp/hbase-root
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> hbase.rootdir:
> > hdfs://192.168.101.114:9000/hbase
> > 2018-08-14 17:25:00,516
> > INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
> > 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> > hbase.zookeeper.quorum: 192.168.101.114:2181
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:PATH=/opt/apache-phoenix-5.0.0-HBase-2.0-bin/bin:/opt/hbase-2.1.0/bin:/opt/hadoop-2.8.4/bin:/opt/jdk1.8.0_172/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HADOOP_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HISTCONTROL=ignoredups
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:JAVA_LIBRARY_PATH=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HBASE_REGIONSERVER_OPTS= -Xdebug -Xnoagent -Djava.compiler=NONE
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HBASE_CONF_DIR=/opt/hbase-2.1.0/conf
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:HDFS_DATANODE_SECURE_USER=root
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:MAIL=/var/spool/mail/root
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:PHOENIX_HOME=/opt/apache-phoenix-5.0.0-HBase-2.0-bin
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> >
> env:LD_LIBRARY_PATH=:/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> > env:LOGNAME=root
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_REST_OPTS=
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:PWD=/opt/hbase-2.1.0/bin
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_PREFIX=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_INSTALL=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_ROOT_LOGGER=INFO,RFA
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:LESSOPEN=||/usr/bin/lesspipe.sh %s
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:SHELL=/bin/bash
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:SELINUX_USE_CURRENT_RANGE=
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:YARN_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_ENV_INIT=true
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_YARN_HOME=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_MASTER_OPTS= -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_MANAGES_ZK=false
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HADOOP_HOME=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_NICENESS=0
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Xdebug
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070
> > -Dhbase.log.dir=/opt/hbase-2.1.0/logs
> > -Dhbase.log.file=hbase-root-master-hbase-114.log
> > -Dhbase.home.dir=/opt/hbase-2.1.0 -Dhbase.id.str=root
> > -Dhbase.root.logger=INFO,RFA
> >
> -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > -Dhbase.security.logger=INFO,RFAS
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_SECURITY_LOGGER=INFO,RFAS
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HDFS_DATANODE_USER=root
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> >
> 1:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=01;36:.au=01;36:.flac=01;36:.mid=01;36:.midi=01;36:.mka=01;36:.mp3=01;36:.mpc=01;36:.ogg=01;36:.ra=01;36:.wav=01;36:.axa=01;36:.oga=01;36:.spx=01;36:*.xspf=01;36:
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:SHLVL=4
> > 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> > env:HBASE_LOGFILE=hbase-root-master-hbase-114.log
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HISTSIZE=1000
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:JAVA_HOME=/opt/jdk1.8.0_172
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:TERM=xterm
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:LANG=zh_CN.UTF-8
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:XDG_SESSION_ID=1
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:SELINUX_LEVEL_REQUESTED=
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HADOOP_LIBEXEC_DIR=/opt/hadoop-2.8.4/libexec
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:SELINUX_ROLE_REQUESTED=
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HADOOP_HDFS_HOME=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HADOOP_MAPRED_HOME=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HADOOP_COMMON_HOME=/opt/hadoop-2.8.4
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> >
> env:HADOOP_OPTS=-Djava.library.path=/opt/hadoop-2.8.4/lib:/opt/hadoop-2.8.4/lib/native
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HBASE_IDENT_STRING=root
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HBASE_ZNODE_FILE=/tmp/hbase-root-master.znode
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:SSH_TTY=/dev/pts/0
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:SSH_CLIENT=192.168.98.129 35604 22
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HBASE_LOG_PREFIX=hbase-root-master-hbase-114
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HBASE_LOG_DIR=/opt/hbase-2.1.0/logs
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:USER=root
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> >
> dparty/commons-logging-1.2.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/log4j-1.2.17.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-5.0.0-HBase-2.0-server.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-core-5.0.0-HBase-2.0.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/opt/hadoop-2.8.4/etc/hadoop:/opt/hadoop-2.8.4/share/hadoop/common/lib/:/opt/hadoop-2.8.4/share/hadoop/common/:/opt/hadoop-2.8.4/share/hadoop/hdfs:/opt/hadoop-2.8.4/share/hadoop/hdfs/lib/:/opt/hadoop-2.8.4/share/hadoop/hdfs/:/opt/hadoop-2.8.4/share/hadoop/yarn/lib/:/opt/hadoop-2.8.4/share/hadoop/yarn/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HDFS_NAMENODE_USER=root
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:SSH_CONNECTION=192.168.98.129 35604 192.168.101.114 22
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HBASE_AUTOSTART_FILE=/tmp/hbase-root-master.autostart
> > 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> > env:HOSTNAME=hbase-114
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:HADOOP_COMMON_LIB_NATIVE_DIR=/opt/hadoop-2.8.4/lib/native
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:XDG_RUNTIME_DIR=/run/user/0
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:HDFS_SECONDARYNAMENODE_USER=root
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:HBASE_THRIFT_OPTS=
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:HBASE_HOME=/opt/hbase-2.1.0
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:HOME=/root
> > 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> > env:MALLOC_ARENA_MAX=4
> > 2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine: vmName=Java
> > HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
> > vmVersion=25.172-b11
> > 2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine:
> > vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p,
> > -XX:+UseConcMarkSweepGC, -Xdebug,
> > -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070,
> > -Dhbase.log.dir=/opt/hbase-2.1.0/logs,
> > -Dhbase.log.file=hbase-root-master-hbase-114.log,
> > -Dhbase.home.dir=/opt/hbase-2.1.0, -Dhbase.id.str=root,
> > -Dhbase.root.logger=INFO,RFA,
> >
> -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:,
> > -Dhbase.security.logger=INFO,RFAS]
> > 2018-08-14 17:25:00,886 INFO [main] metrics.MetricRegistries: Loaded
> > MetricRegistries class
> > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> > 2018-08-14 17:25:01,258 INFO [main] regionserver.RSRpcServices:
> > master/hbase-114:16000 server-side Connection retries=3
> > 2018-08-14 17:25:01,278 INFO [main] ipc.RpcExecutor: Instantiated
> > default.FPBQ.Fifo with queueClass=class
> > java.util.concurrent.LinkedBlockingQueue; numCallQueues=3,
> > maxQueueLength=300, handlerCount=30
> > 2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated
> > priority.FPBQ.Fifo with queueClass=class
> > java.util.concurrent.LinkedBlockingQueue; numCallQueues=2,
> > maxQueueLength=300, handlerCount=20
> > 2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated
> > replication.FPBQ.Fifo with queueClass=class
> > java.util.concurrent.LinkedBlockingQueue; numCallQueues=1,
> > maxQueueLength=300, handlerCount=3
> > 2018-08-14 17:25:01,418 INFO [main] ipc.RpcServerFactory: Creating
> > org.apache.hadoop.hbase.ipc.NettyRpcServer hosting
> hbase.pb.MasterService,
> > hbase.pb.RegionServerStatusService, hbase.pb.LockService,
> > hbase.pb.ClientService, hbase.pb.AdminService
> > 2018-08-14 17:25:01,632 INFO [main] ipc.NettyRpcServer: Bind to /
> > 192.168.101.114:16000
> > 2018-08-14 17:25:01,688 INFO [main] hfile.CacheConfig: Allocating onheap
> > LruBlockCache size=1.55 GB, blockSize=64 KB
> > 2018-08-14 17:25:01,694 INFO [main] hfile.CacheConfig: Created
> > cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB,
> > freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB,
> > minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40
> MB,
> > singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> > cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> > cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
> > 2018-08-14 17:25:01,695 INFO [main] hfile.CacheConfig: Created
> > cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB,
> > freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB,
> > minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40
> MB,
> > singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> > cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> > cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
> > 2018-08-14 17:25:02,160 INFO [main] fs.HFileSystem: Added intercepting
> > call to namenode#getBlockLocations so can do block reordering using class
> > org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> > 2018-08-14 17:25:02,163 INFO [main] fs.HFileSystem: Added intercepting
> > call to namenode#getBlockLocations so can do block reordering using class
> > org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> > 2018-08-14 17:25:02,233 INFO [main] zookeeper.RecoverableZooKeeper:
> > Process identifier=master:16000 connecting to ZooKeeper ensemble=
> > 192.168.101.114:2181
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> >
> environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f,
> > built on 03/23/2017 10:13 GMT
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:host.name=hbase-114
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.version=1.8.0_172
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.vendor=Oracle Corporation
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.home=/opt/jdk1.8.0_172/jre
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper:
> >
> o-2.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> >
> environment:java.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.io.tmpdir=/tmp
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.compiler=<NA>
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:os.name=Linux
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:os.arch=amd64
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:os.version=3.10.0-862.el7.x86_64
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:user.name=root
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:user.home=/root
> > 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> > environment:user.dir=/opt/hbase-2.1.0/bin
> > 2018-08-14 17:25:02,240 INFO [main] zookeeper.ZooKeeper: Initiating
> client
> > connection, connectString=192.168.101.114:2181 sessionTimeout=90000
> > watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@4ae2e781
> > 2018-08-14 17:25:02,256 INFO [main-SendThread(192.168.101.114:2181)]
> > zookeeper.ClientCnxn: Opening socket connection to server
> > 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate
> > using SASL (unknown error)
> > 2018-08-14 17:25:02,264 INFO [main-SendThread(192.168.101.114:2181)]
> > zookeeper.ClientCnxn: Socket connection established to
> > 192.168.101.114/192.168.101.114:2181, initiating session
> > 2018-08-14 17:25:02,282 INFO [main-SendThread(192.168.101.114:2181)]
> > zookeeper.ClientCnxn: Session establishment complete on server
> > 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970000,
> > negotiated timeout = 40000
> > 2018-08-14 17:25:02,352 INFO [main] util.log: Logging initialized @2552ms
> > 2018-08-14 17:25:02,413 INFO [main] http.HttpRequestLog: Http request log
> > for http.requests.master is not defined
> > 2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter
> > 'safety'
> (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
> > 2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter
> > 'clickjackingprevention'
> > (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
> > 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> > static_user_filter
> >
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context master
> > 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> > static_user_filter
> >
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context static
> > 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> > static_user_filter
> >
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> > to context logs
> > 2018-08-14 17:25:02,455 INFO [main] http.HttpServer: Jetty bound to port
> > 16010
> > 2018-08-14 17:25:02,456 INFO [main] server.Server: jetty-9.3.19.v20170502
> > 2018-08-14 17:25:02,489 INFO [main] handler.ContextHandler: Started
> > o.e.j.s.ServletContextHandler@49232c6f
> > {/logs,file:///opt/hbase-2.1.0/logs/,AVAILABLE}
> > 2018-08-14 17:25:02,490 INFO [main] handler.ContextHandler: Started
> > o.e.j.s.ServletContextHandler@279126f5
> > {/static,file:///opt/hbase-2.1.0/hbase-webapps/static/,AVAILABLE}
> > 2018-08-14 17:25:02,582 INFO [main] handler.ContextHandler: Started
> > o.e.j.w.WebAppContext@537b3b2e
> > {/,file:///opt/hbase-2.1.0/hbase-webapps/master/,AVAILABLE}
> >
> > {file:/opt/hbase-2.1.0/hbase-webapps/master}
> >
> > 2018-08-14 17:25:02,587 INFO [main] server.AbstractConnector: Started
> > ServerConnector@550c973e{HTTP/1.1,[http/1.1]}
> >
> > {0.0.0.0:16010}
> >
> > 2018-08-14 17:25:02,587 INFO [main] server.Server: Started @2787ms
> > 2018-08-14 17:25:02,590 INFO [main] master.HMaster: hbase.rootdir=hdfs://
> > 192.168.101.114:9000/hbase, hbase.cluster.distributed=true
> > 2018-08-14 17:25:02,606 INFO [Thread-14] master.HMaster: Adding backup
> > master ZNode /hbase/backup-masters/hbase-114,16000,1534238700547
> > 2018-08-14 17:25:02,685 INFO [Thread-14] master.ActiveMasterManager:
> > Deleting ZNode for /hbase/backup-masters/hbase-114,16000,1534238700547
> from
> > backup master directory
> > 2018-08-14 17:25:02,691 INFO [Thread-14] master.ActiveMasterManager:
> > Registered as active master=hbase-114,16000,1534238700547
> > 2018-08-14 17:25:02,697 INFO [Thread-14] regionserver.ChunkCreator:
> > Allocating data MemStoreChunkPool with chunk size 2 MB, max count 713,
> > initial count 0
> > 2018-08-14 17:25:02,698 INFO [Thread-14] regionserver.ChunkCreator:
> > Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count
> > 792, initial count 0
> > 2018-08-14 17:25:02,992 INFO [Thread-14] fs.HFileSystem: Added
> > intercepting call to namenode#getBlockLocations so can do block
> reordering
> > using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> > 2018-08-14 17:25:03,001 INFO [Thread-14]
> > coordination.SplitLogManagerCoordination: Found 0 orphan tasks and 0
> rescan
> > nodes
> > 2018-08-14 17:25:03,094 INFO [Thread-14] zookeeper.ReadOnlyZKClient:
> > Connect 0x66461af1 to 192.168.101.114:2181 with session timeout=90000ms,
> > retries 30, retry interval 1000ms, keepAlive=60000ms
> > 2018-08-14 17:25:03,100 INFO
> > [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1] zookeeper.ZooKeeper:
> > Initiating client connection, connectString=192.168.101.114:2181
> > sessionTimeout=90000
> >
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$52/1619197561@6e451c19
> > 2018-08-14 17:25:03,101 INFO
> > [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> > 192.168.101.114:2181)] zookeeper.ClientCnxn: Opening socket connection
> to
> > server 192.168.101.114/192.168.101.114:2181. Will not attempt to
> > authenticate using SASL (unknown error)
> > 2018-08-14 17:25:03,101 INFO
> > [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> > 192.168.101.114:2181)] zookeeper.ClientCnxn: Socket connection
> > established to 192.168.101.114/192.168.101.114:2181, initiating session
> > 2018-08-14 17:25:03,104 INFO
> > [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> > 192.168.101.114:2181)] zookeeper.ClientCnxn: Session establishment
> > complete on server 192.168.101.114/192.168.101.114:2181, sessionid =
> > 0x10004dac1970001, negotiated timeout = 40000
> > 2018-08-14 17:25:03,145 INFO [Thread-14] procedure2.ProcedureExecutor:
> > Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker
> > count=160
> > 2018-08-14 17:25:03,149 INFO [Thread-14] util.FSHDFSUtils: Recover lease
> > on dfs file hdfs://
> > 192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log
> > 2018-08-14
> > 17:25:03,153 INFO [Thread-14] util.FSHDFSUtils: Recovered lease,
> attempt=0
> > on file=hdfs://
> > 192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log
> > after 4ms
> > 2018-08-14 17:25:03,188 WARN [Thread-14] util.CommonFSUtils: Your Hadoop
> > installation does not include the StreamCapabilities class from
> HDFS-11644,
> > so we will skip checking if any FSDataOutputStreams actually support
> > hflush/hsync. If you are running on top of HDFS this probably just means
> > you have an older version and this can be ignored. If you are running on
> > top of an alternate FileSystem implementation you should manually verify
> > that hflush and hsync are implemented; otherwise you risk data loss and
> > hard to diagnose errors when our assumptions are violated.
> > 2018-08-14 17:25:03,189 INFO [Thread-14] wal.WALProcedureStore: Rolled
> new
> > Procedure Store WAL, id=5
> > 2018-08-14 17:25:03,190 INFO [Thread-14] procedure2.ProcedureExecutor:
> > Recovered WALProcedureStore lease in 42msec
> > 2018-08-14 17:25:03,224 INFO [Thread-14] procedure2.ProcedureExecutor:
> > Loaded WALProcedureStore in 33msec
> > 2018-08-14 17:25:03,224 INFO [Thread-14]
> > procedure2.RemoteProcedureDispatcher: Instantiated, coreThreads=128
> > (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
> > 2018-08-14 17:25:03,261 WARN [Thread-14] master.ServerManager: Expiration
> > of hbase-116,16020,1534237430655 but server not online
> > 2018-08-14 17:25:03,261 INFO [Thread-14] master.ServerManager: Processing
> > expiration of hbase-116,16020,1534237430655 on
> hbase-114,16000,1534238700547
> > 2018-08-14 17:25:03,481 WARN [Thread-14] master.ServerManager: Expiration
> > of hbase-115,16020,1534237425729 but server not online
> > 2018-08-14 17:25:03,481 INFO [Thread-14] master.ServerManager: Processing
> > expiration of hbase-115,16020,1534237425729 on
> hbase-114,16000,1534238700547
> > 2018-08-14 17:25:03,622 INFO [Thread-14] balancer.BaseLoadBalancer:
> > slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false
> > 2018-08-14 17:25:03,629 INFO [Thread-14] balancer.StochasticLoadBalancer:
> > Loaded config; maxSteps=1000000, stepsPerRegion=800,
> maxRunningTime=30000,
> > isByTable=false, etc.
> > 2018-08-14 17:25:03,669 INFO [Thread-14] master.HMaster: Active/primary
> > master=hbase-114,16000,1534238700547, sessionid=0x10004dac1970000,
> setting
> > cluster-up flag (Was=false)
> > 2018-08-14 17:25:03,771 INFO [PEWorker-4] procedure.ServerCrashProcedure:
> > Start pid=12, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure
> > server=hbase-115,16020,1534237425729, splitWal=true, meta=false
> > 2018-08-14 17:25:03,772 INFO [Thread-14]
> procedure2.TimeoutExecutorThread:
> > ADDED pid=-1, state=WAITING_TIMEOUT;
> >
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner;
> > timeout=30000, timestamp=1534238733772
> > 2018-08-14 17:25:03,774 INFO [PEWorker-3] procedure.ServerCrashProcedure:
> > Start pid=11, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure
> > server=hbase-116,16020,1534237430655, splitWal=true, meta=false
> > 2018-08-14 17:25:03,775 INFO [Thread-14] cleaner.CleanerChore: Cleaner
> > pool size is 1
> > 2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.RecoverableZooKeeper:
> > Process identifier=replicationLogCleaner connecting to ZooKeeper
> ensemble=
> > 192.168.101.114:2181
> > 2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.ZooKeeper: Initiating
> > client connection, connectString=192.168.101.114:2181
> > sessionTimeout=90000
> > watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@46bb7405
> > 2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181
> )]
> > zookeeper.ClientCnxn: Opening socket connection to server
> > 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate
> > using SASL (unknown error)
> > 2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181
> )]
> > zookeeper.ClientCnxn: Socket connection established to
> > 192.168.101.114/192.168.101.114:2181, initiating session
> > 2018-08-14 17:25:03,777 INFO [Thread-14] cleaner.LogCleaner: Creating
> > OldWALs cleaners with size=2
> > 2018-08-14 17:25:03,780 INFO [Thread-14-SendThread(192.168.101.114:2181
> )]
> > zookeeper.ClientCnxn: Session establishment complete on server
> > 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970006,
> > negotiated timeout = 40000
> > 2018-08-14 17:25:03,967 INFO
> > [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000]
> > master.ServerManager: Registering
> regionserver=hbase-116,16020,1534238701517
> > 2018-08-14 17:25:03,967 INFO
> > [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000]
> > master.ServerManager: Registering
> regionserver=hbase-115,16020,1534238702258
> > 2018-08-14 17:25:04,022 INFO [RegionServerTracker-0]
> > master.RegionServerTracker: RegionServer ephemeral node created, adding
> > [hbase-116,16020,1534238701517]
> > 2018-08-14 17:25:04,023 INFO [RegionServerTracker-0]
> > master.RegionServerTracker: RegionServer ephemeral node created, adding
> > [hbase-115,16020,1534238702258]
> > 2018-08-14 17:25:33,877 INFO [WALProcedureStoreSyncThread]
> > wal.ProcedureWALFile: Archiving hdfs://
> > 192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log
> to
> > hdfs://192.168.101.114:9000/hbase/oldWALs/pv2-00000000000000000004.log
> > 2018-08-14
> > 17:26:59,875 WARN [qtp1304765785-87] servlet.ServletHandler:
> /master-status
> > org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
> > at
> >
> org.apache.hadoop.hbase.master.HMaster.isInMaintenanceMode(HMaster.java:2890)
> > at
> >
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:277)
> > at
> >
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:395)
> > at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:386)
> > at org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:81)
> > at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> > at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> > at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> > at org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> > at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> > at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> > at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> > at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> > at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> > at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> > at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> > at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> > at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> > at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> > at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> > at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> > at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> > at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> > at org.eclipse.jetty.server.Server.handle(Server.java:534)
> > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> > at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> > at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> > at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> > at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> > at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> > at java.lang.Thread.run(Thread.java:748)
>

Re: HBase 2.1.0 - RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure

Posted by Allan Yang <al...@apache.org>.
Those logs are not enough to locate the problem.
Best Regards
Allan Yang
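
[Editor's note: for readers who land on this thread with the same stuck RUNNABLE:SERVER_CRASH_START procedures after an unclean power-off, the following is a hedged sketch of common first steps, not advice from this thread. It assumes HBase 2.1.x with the rootdir hdfs://…/hbase shown in the log; the `.bak` path is purely illustrative.]

```shell
# Check whether the ServerCrashProcedures (pid=11 and pid=12 in the
# log above) are making progress or permanently stuck.
echo "list_procedures" | hbase shell -n

# Last-resort workaround sometimes suggested for a master that cannot
# finish replaying its procedure WALs after an unclean shutdown:
# stop the master, sideline the MasterProcWALs directory, restart.
# This discards in-flight procedure state, so treat it as a recovery
# step, not routine maintenance.
hdfs dfs -mv /hbase/MasterProcWALs /hbase/MasterProcWALs.bak
```

If regions remain unassigned after a restart, the HBCK2 tool from hbase-operator-tools provides `assigns` and `bypass` commands for HBase 2.x clusters.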


jcl <51...@163.com> wrote on Tue, Aug 14, 2018 at 9:18 PM:

> After powering off and restarting (Hadoop and HBase), the Master is stuck initializing; ServerManager logs "crash processing already in progress".
>
> The jps command shows HMaster and HRegionServer are both alive.
>
>
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534248994825 but crash processing already in progress
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248590107 but crash processing already in progress
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534249077856 but crash processing already in progress
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248994045 but crash processing already in progress
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-115,16020,1534248708149 but crash processing already in progress
> WARN  [Thread-14] master.ServerManager: Expiration called on hbase-116,16020,1534248707381 but crash processing already in progress
>
>
>
> LOG:
>
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 64091
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 64091
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
> 2018-08-14 17:25:00,173 INFO [main] master.HMaster: STARTING service
> HMaster
> 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: HBase 2.1.0
> 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Source code
> repository revision=4531d1c947a25b28a9a994b60c791a112c12a2b4
> 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: Compiled by hbase on
> Wed Aug 1 11:25:59 2018
> 2018-08-14 17:25:00,174 INFO [main] util.VersionInfo: From source with
> checksum fc32566f7e030ff71458fbf6dc77bce9
> 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.tmp.dir:
> /tmp/hbase-root
> 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.rootdir: hdfs://192.168.101.114:9000/hbase
> 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
> 2018-08-14 17:25:00,516 INFO [main] util.ServerCommandLine:
> hbase.zookeeper.quorum: 192.168.101.114:2181
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:PATH=/opt/apache-phoenix-5.0.0-HBase-2.0-bin/bin:/opt/hbase-2.1.0/bin:/opt/hadoop-2.8.4/bin:/opt/jdk1.8.0_172/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:HADOOP_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:HISTCONTROL=ignoredups
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:JAVA_LIBRARY_PATH=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:HBASE_REGIONSERVER_OPTS= -Xdebug -Xnoagent -Djava.compiler=NONE
> -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:HBASE_CONF_DIR=/opt/hbase-2.1.0/conf
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:HDFS_DATANODE_SECURE_USER=root
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:MAIL=/var/spool/mail/root
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:PHOENIX_HOME=/opt/apache-phoenix-5.0.0-HBase-2.0-bin
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:LD_LIBRARY_PATH=:/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> 2018-08-14 17:25:00,517 INFO [main] util.ServerCommandLine:
> env:LOGNAME=root
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_REST_OPTS=
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:PWD=/opt/hbase-2.1.0/bin
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HADOOP_PREFIX=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HADOOP_INSTALL=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_ROOT_LOGGER=INFO,RFA
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:LESSOPEN=||/usr/bin/lesspipe.sh %s
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:SHELL=/bin/bash
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:SELINUX_USE_CURRENT_RANGE=
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:YARN_CONF_DIR=/opt/hadoop-2.8.4/etc/hadoop
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_ENV_INIT=true
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HADOOP_YARN_HOME=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_MASTER_OPTS= -Xdebug
> -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_MANAGES_ZK=false
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HADOOP_HOME=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_NICENESS=0
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Xdebug
> -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070
> -Dhbase.log.dir=/opt/hbase-2.1.0/logs
> -Dhbase.log.file=hbase-root-master-hbase-114.log
> -Dhbase.home.dir=/opt/hbase-2.1.0 -Dhbase.id.str=root
> -Dhbase.root.logger=INFO,RFA
> -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> -Dhbase.security.logger=INFO,RFAS
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_SECURITY_LOGGER=INFO,RFAS
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HDFS_DATANODE_USER=root
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> 1:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=01;36:.au=01;36:.flac=01;36:.mid=01;36:.midi=01;36:.mka=01;36:.mp3=01;36:.mpc=01;36:.ogg=01;36:.ra=01;36:.wav=01;36:.axa=01;36:.oga=01;36:.spx=01;36:*.xspf=01;36:
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine: env:SHLVL=4
> 2018-08-14 17:25:00,518 INFO [main] util.ServerCommandLine:
> env:HBASE_LOGFILE=hbase-root-master-hbase-114.log
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HISTSIZE=1000
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:JAVA_HOME=/opt/jdk1.8.0_172
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:TERM=xterm
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:LANG=zh_CN.UTF-8
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:XDG_SESSION_ID=1
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:SELINUX_LEVEL_REQUESTED=
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HADOOP_LIBEXEC_DIR=/opt/hadoop-2.8.4/libexec
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:SELINUX_ROLE_REQUESTED=
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HADOOP_HDFS_HOME=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HADOOP_MAPRED_HOME=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HADOOP_COMMON_HOME=/opt/hadoop-2.8.4
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HADOOP_OPTS=-Djava.library.path=/opt/hadoop-2.8.4/lib:/opt/hadoop-2.8.4/lib/native
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HBASE_IDENT_STRING=root
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HBASE_ZNODE_FILE=/tmp/hbase-root-master.znode
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:SSH_TTY=/dev/pts/0
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:SSH_CLIENT=192.168.98.129 35604 22
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HBASE_LOG_PREFIX=hbase-root-master-hbase-114
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HBASE_LOG_DIR=/opt/hbase-2.1.0/logs
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine: env:USER=root
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> dparty/commons-logging-1.2.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/log4j-1.2.17.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-5.0.0-HBase-2.0-server.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/phoenix-core-5.0.0-HBase-2.0.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/opt/hadoop-2.8.4/etc/hadoop:/opt/hadoop-2.8.4/share/hadoop/common/lib/:/opt/hadoop-2.8.4/share/hadoop/common/:/opt/hadoop-2.8.4/share/hadoop/hdfs:/opt/hadoop-2.8.4/share/hadoop/hdfs/lib/:/opt/hadoop-2.8.4/share/hadoop/hdfs/:/opt/hadoop-2.8.4/share/hadoop/yarn/lib/:/opt/hadoop-2.8.4/share/hadoop/yarn/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/:/opt/hadoop-2.8.4/share/hadoop/mapreduce/:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HDFS_NAMENODE_USER=root
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:SSH_CONNECTION=192.168.98.129 35604 192.168.101.114 22
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HBASE_AUTOSTART_FILE=/tmp/hbase-root-master.autostart
> 2018-08-14 17:25:00,519 INFO [main] util.ServerCommandLine:
> env:HOSTNAME=hbase-114
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:HADOOP_COMMON_LIB_NATIVE_DIR=/opt/hadoop-2.8.4/lib/native
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:XDG_RUNTIME_DIR=/run/user/0
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:HDFS_SECONDARYNAMENODE_USER=root
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:HBASE_THRIFT_OPTS=
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:HBASE_HOME=/opt/hbase-2.1.0
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine: env:HOME=/root
> 2018-08-14 17:25:00,520 INFO [main] util.ServerCommandLine:
> env:MALLOC_ARENA_MAX=4
> 2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine: vmName=Java
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation,
> vmVersion=25.172-b11
> 2018-08-14 17:25:00,521 INFO [main] util.ServerCommandLine:
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p,
> -XX:+UseConcMarkSweepGC, -Xdebug,
> -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070,
> -Dhbase.log.dir=/opt/hbase-2.1.0/logs,
> -Dhbase.log.file=hbase-root-master-hbase-114.log,
> -Dhbase.home.dir=/opt/hbase-2.1.0, -Dhbase.id.str=root,
> -Dhbase.root.logger=INFO,RFA,
> -Djava.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:,
> -Dhbase.security.logger=INFO,RFAS]
> 2018-08-14 17:25:00,886 INFO [main] metrics.MetricRegistries: Loaded
> MetricRegistries class
> org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> 2018-08-14 17:25:01,258 INFO [main] regionserver.RSRpcServices:
> master/hbase-114:16000 server-side Connection retries=3
> 2018-08-14 17:25:01,278 INFO [main] ipc.RpcExecutor: Instantiated
> default.FPBQ.Fifo with queueClass=class
> java.util.concurrent.LinkedBlockingQueue; numCallQueues=3,
> maxQueueLength=300, handlerCount=30
> 2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated
> priority.FPBQ.Fifo with queueClass=class
> java.util.concurrent.LinkedBlockingQueue; numCallQueues=2,
> maxQueueLength=300, handlerCount=20
> 2018-08-14 17:25:01,280 INFO [main] ipc.RpcExecutor: Instantiated
> replication.FPBQ.Fifo with queueClass=class
> java.util.concurrent.LinkedBlockingQueue; numCallQueues=1,
> maxQueueLength=300, handlerCount=3
> 2018-08-14 17:25:01,418 INFO [main] ipc.RpcServerFactory: Creating
> org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService,
> hbase.pb.RegionServerStatusService, hbase.pb.LockService,
> hbase.pb.ClientService, hbase.pb.AdminService
> 2018-08-14 17:25:01,632 INFO [main] ipc.NettyRpcServer: Bind to /192.168.101.114:16000
> 2018-08-14 17:25:01,688 INFO [main] hfile.CacheConfig: Allocating onheap
> LruBlockCache size=1.55 GB, blockSize=64 KB
> 2018-08-14 17:25:01,694 INFO [main] hfile.CacheConfig: Created
> cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB,
> freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB,
> minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40 MB,
> singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
> 2018-08-14 17:25:01,695 INFO [main] hfile.CacheConfig: Created
> cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1.16 MB,
> freeSize=1.55 GB, maxSize=1.55 GB, heapSize=1.16 MB, minSize=1.47 GB,
> minFactor=0.95, multiSize=752.80 MB, multiFactor=0.5, singleSize=376.40 MB,
> singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
> 2018-08-14 17:25:02,160 INFO [main] fs.HFileSystem: Added intercepting
> call to namenode#getBlockLocations so can do block reordering using class
> org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2018-08-14 17:25:02,163 INFO [main] fs.HFileSystem: Added intercepting
> call to namenode#getBlockLocations so can do block reordering using class
> org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2018-08-14 17:25:02,233 INFO [main] zookeeper.RecoverableZooKeeper:
> Process identifier=master:16000 connecting to ZooKeeper ensemble=
> 192.168.101.114:2181
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f,
> built on 03/23/2017 10:13 GMT
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:host.name=hbase-114
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.version=1.8.0_172
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.vendor=Oracle Corporation
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.home=/opt/jdk1.8.0_172/jre
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper:
> o-2.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/opt/hadoop-2.8.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/opt/hadoop-2.8.4/contrib/capacity-scheduler/*.jar:/opt/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.library.path=/opt/hadoop-2.8.4/lib/native::/opt/hadoop-2.8.4/lib/native:
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:os.version=3.10.0-862.el7.x86_64
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:user.name=root
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:user.home=/root
> 2018-08-14 17:25:02,239 INFO [main] zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/hbase-2.1.0/bin
> 2018-08-14 17:25:02,240 INFO [main] zookeeper.ZooKeeper: Initiating client
> connection, connectString=192.168.101.114:2181 sessionTimeout=90000
> watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@4ae2e781
> 2018-08-14 17:25:02,256 INFO [main-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server
> 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate
> using SASL (unknown error)
> 2018-08-14 17:25:02,264 INFO [main-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Socket connection established to
> 192.168.101.114/192.168.101.114:2181, initiating session
> 2018-08-14 17:25:02,282 INFO [main-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server
> 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970000,
> negotiated timeout = 40000
> 2018-08-14 17:25:02,352 INFO [main] util.log: Logging initialized @2552ms
> 2018-08-14 17:25:02,413 INFO [main] http.HttpRequestLog: Http request log
> for http.requests.master is not defined
> 2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter
> 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
> 2018-08-14 17:25:02,426 INFO [main] http.HttpServer: Added global filter
> 'clickjackingprevention'
> (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
> 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context master
> 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context static
> 2018-08-14 17:25:02,428 INFO [main] http.HttpServer: Added filter
> static_user_filter
> (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context logs
> 2018-08-14 17:25:02,455 INFO [main] http.HttpServer: Jetty bound to port
> 16010
> 2018-08-14 17:25:02,456 INFO [main] server.Server: jetty-9.3.19.v20170502
> 2018-08-14 17:25:02,489 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@49232c6f{/logs,file:///opt/hbase-2.1.0/logs/,AVAILABLE}
> 2018-08-14 17:25:02,490 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@279126f5{/static,file:///opt/hbase-2.1.0/hbase-webapps/static/,AVAILABLE}
> 2018-08-14 17:25:02,582 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@537b3b2e{/,file:///opt/hbase-2.1.0/hbase-webapps/master/,AVAILABLE}{file:/opt/hbase-2.1.0/hbase-webapps/master}
> 2018-08-14 17:25:02,587 INFO [main] server.AbstractConnector: Started ServerConnector@550c973e{HTTP/1.1,[http/1.1]}{0.0.0.0:16010}
> 2018-08-14 17:25:02,587 INFO [main] server.Server: Started @2787ms
> 2018-08-14 17:25:02,590 INFO [main] master.HMaster: hbase.rootdir=hdfs://192.168.101.114:9000/hbase, hbase.cluster.distributed=true
> 2018-08-14 17:25:02,606 INFO [Thread-14] master.HMaster: Adding backup
> master ZNode /hbase/backup-masters/hbase-114,16000,1534238700547
> 2018-08-14 17:25:02,685 INFO [Thread-14] master.ActiveMasterManager:
> Deleting ZNode for /hbase/backup-masters/hbase-114,16000,1534238700547 from
> backup master directory
> 2018-08-14 17:25:02,691 INFO [Thread-14] master.ActiveMasterManager:
> Registered as active master=hbase-114,16000,1534238700547
> 2018-08-14 17:25:02,697 INFO [Thread-14] regionserver.ChunkCreator:
> Allocating data MemStoreChunkPool with chunk size 2 MB, max count 713,
> initial count 0
> 2018-08-14 17:25:02,698 INFO [Thread-14] regionserver.ChunkCreator:
> Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count
> 792, initial count 0
> 2018-08-14 17:25:02,992 INFO [Thread-14] fs.HFileSystem: Added
> intercepting call to namenode#getBlockLocations so can do block reordering
> using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
> 2018-08-14 17:25:03,001 INFO [Thread-14]
> coordination.SplitLogManagerCoordination: Found 0 orphan tasks and 0 rescan
> nodes
> 2018-08-14 17:25:03,094 INFO [Thread-14] zookeeper.ReadOnlyZKClient:
> Connect 0x66461af1 to 192.168.101.114:2181 with session timeout=90000ms,
> retries 30, retry interval 1000ms, keepAlive=60000ms
> 2018-08-14 17:25:03,100 INFO
> [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1] zookeeper.ZooKeeper:
> Initiating client connection, connectString=192.168.101.114:2181
> sessionTimeout=90000
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$52/1619197561@6e451c19
> 2018-08-14 17:25:03,101 INFO
> [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> 192.168.101.114:2181)] zookeeper.ClientCnxn: Opening socket connection to
> server 192.168.101.114/192.168.101.114:2181. Will not attempt to
> authenticate using SASL (unknown error)
> 2018-08-14 17:25:03,101 INFO
> [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> 192.168.101.114:2181)] zookeeper.ClientCnxn: Socket connection
> established to 192.168.101.114/192.168.101.114:2181, initiating session
> 2018-08-14 17:25:03,104 INFO
> [ReadOnlyZKClient-192.168.101.114:2181@0x66461af1-SendThread(
> 192.168.101.114:2181)] zookeeper.ClientCnxn: Session establishment
> complete on server 192.168.101.114/192.168.101.114:2181, sessionid =
> 0x10004dac1970001, negotiated timeout = 40000
> 2018-08-14 17:25:03,145 INFO [Thread-14] procedure2.ProcedureExecutor:
> Starting 16 core workers (bigger of cpus/4 or 16) with max (burst) worker
> count=160
> 2018-08-14 17:25:03,149 INFO [Thread-14] util.FSHDFSUtils: Recover lease on dfs file hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log
> 2018-08-14 17:25:03,153 INFO [Thread-14] util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log after 4ms
> 2018-08-14 17:25:03,188 WARN [Thread-14] util.CommonFSUtils: Your Hadoop
> installation does not include the StreamCapabilities class from HDFS-11644,
> so we will skip checking if any FSDataOutputStreams actually support
> hflush/hsync. If you are running on top of HDFS this probably just means
> you have an older version and this can be ignored. If you are running on
> top of an alternate FileSystem implementation you should manually verify
> that hflush and hsync are implemented; otherwise you risk data loss and
> hard to diagnose errors when our assumptions are violated.
> 2018-08-14 17:25:03,189 INFO [Thread-14] wal.WALProcedureStore: Rolled new
> Procedure Store WAL, id=5
> 2018-08-14 17:25:03,190 INFO [Thread-14] procedure2.ProcedureExecutor:
> Recovered WALProcedureStore lease in 42msec
> 2018-08-14 17:25:03,224 INFO [Thread-14] procedure2.ProcedureExecutor:
> Loaded WALProcedureStore in 33msec
> 2018-08-14 17:25:03,224 INFO [Thread-14]
> procedure2.RemoteProcedureDispatcher: Instantiated, coreThreads=128
> (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
> 2018-08-14 17:25:03,261 WARN [Thread-14] master.ServerManager: Expiration
> of hbase-116,16020,1534237430655 but server not online
> 2018-08-14 17:25:03,261 INFO [Thread-14] master.ServerManager: Processing
> expiration of hbase-116,16020,1534237430655 on hbase-114,16000,1534238700547
> 2018-08-14 17:25:03,481 WARN [Thread-14] master.ServerManager: Expiration
> of hbase-115,16020,1534237425729 but server not online
> 2018-08-14 17:25:03,481 INFO [Thread-14] master.ServerManager: Processing
> expiration of hbase-115,16020,1534237425729 on hbase-114,16000,1534238700547
> 2018-08-14 17:25:03,622 INFO [Thread-14] balancer.BaseLoadBalancer:
> slop=0.001, tablesOnMaster=false, systemTablesOnMaster=false
> 2018-08-14 17:25:03,629 INFO [Thread-14] balancer.StochasticLoadBalancer:
> Loaded config; maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000,
> isByTable=false, etc.
> 2018-08-14 17:25:03,669 INFO [Thread-14] master.HMaster: Active/primary
> master=hbase-114,16000,1534238700547, sessionid=0x10004dac1970000, setting
> cluster-up flag (Was=false)
> 2018-08-14 17:25:03,771 INFO [PEWorker-4] procedure.ServerCrashProcedure:
> Start pid=12, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure
> server=hbase-115,16020,1534237425729, splitWal=true, meta=false
> 2018-08-14 17:25:03,772 INFO [Thread-14] procedure2.TimeoutExecutorThread:
> ADDED pid=-1, state=WAITING_TIMEOUT;
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$CompletedProcedureCleaner;
> timeout=30000, timestamp=1534238733772
> 2018-08-14 17:25:03,774 INFO [PEWorker-3] procedure.ServerCrashProcedure:
> Start pid=11, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure
> server=hbase-116,16020,1534237430655, splitWal=true, meta=false
> 2018-08-14 17:25:03,775 INFO [Thread-14] cleaner.CleanerChore: Cleaner
> pool size is 1
> 2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.RecoverableZooKeeper:
> Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=
> 192.168.101.114:2181
> 2018-08-14 17:25:03,776 INFO [Thread-14] zookeeper.ZooKeeper: Initiating
> client connection, connectString=192.168.101.114:2181
> sessionTimeout=90000
> watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@46bb7405
> 2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server
> 192.168.101.114/192.168.101.114:2181. Will not attempt to authenticate
> using SASL (unknown error)
> 2018-08-14 17:25:03,777 INFO [Thread-14-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Socket connection established to
> 192.168.101.114/192.168.101.114:2181, initiating session
> 2018-08-14 17:25:03,777 INFO [Thread-14] cleaner.LogCleaner: Creating
> OldWALs cleaners with size=2
> 2018-08-14 17:25:03,780 INFO [Thread-14-SendThread(192.168.101.114:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server
> 192.168.101.114/192.168.101.114:2181, sessionid = 0x10004dac1970006,
> negotiated timeout = 40000
> 2018-08-14 17:25:03,967 INFO
> [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000]
> master.ServerManager: Registering regionserver=hbase-116,16020,1534238701517
> 2018-08-14 17:25:03,967 INFO
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000]
> master.ServerManager: Registering regionserver=hbase-115,16020,1534238702258
> 2018-08-14 17:25:04,022 INFO [RegionServerTracker-0]
> master.RegionServerTracker: RegionServer ephemeral node created, adding
> [hbase-116,16020,1534238701517]
> 2018-08-14 17:25:04,023 INFO [RegionServerTracker-0]
> master.RegionServerTracker: RegionServer ephemeral node created, adding
> [hbase-115,16020,1534238702258]
> 2018-08-14 17:25:33,877 INFO [WALProcedureStoreSyncThread] wal.ProcedureWALFile: Archiving hdfs://192.168.101.114:9000/hbase/MasterProcWALs/pv2-00000000000000000004.log to hdfs://192.168.101.114:9000/hbase/oldWALs/pv2-00000000000000000004.log
> 2018-08-14 17:26:59,875 WARN [qtp1304765785-87] servlet.ServletHandler: /master-status
> org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
> at org.apache.hadoop.hbase.master.HMaster.isInMaintenanceMode(HMaster.java:2890)
> at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:277)
> at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:395)
> at org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:386)
> at org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:81)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:748)