Posted to user@hbase.apache.org by sunweiwei <su...@asiainfo-linkage.com> on 2014/04/22 11:33:01 UTC

Re: oldWALs too large

Hi
I have seen HBASE-3489, but I'm not sure; maybe it is not this problem.
I also found some error messages in the HMaster log, like "won't delete
any more files in:hdfs://hdpcluster/apps/hbase/data/oldWALs".
Is this a problem?
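
If it helps anyone reproduce this, lines like these can be pulled out of
the HMaster log with something like the following (a sketch; the log path
is a guess for an HDP-style layout and will differ per install):

  # Hypothetical log location; adjust the host name and path for your cluster
  grep -E 'ReplicationLogCleaner|CleanerChore' \
      /var/log/hbase/hbase-hbase-master-hadoop01.log

The excerpt around the error follows: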

2014-04-16 17:24:56,305 DEBUG [338010897@qtp-1369711350-4] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@8611b5c
2014-04-16 17:24:56,306 INFO  [338010897@qtp-1369711350-4] zookeeper.ZooKeeper: Session: 0x24564a8779c0168 closed
2014-04-16 17:24:56,306 INFO  [338010897@qtp-1369711350-4-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-04-16 18:08:26,637 WARN  [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
2014-04-16 18:08:26,637 INFO  [master:hadoop01:60000.oldLogCleaner] util.RetryCounter: Sleeping 1000ms before retry #0...
2014-04-16 18:08:27,637 WARN  [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
2014-04-16 18:08:27,638 INFO  [master:hadoop01:60000.oldLogCleaner] util.RetryCounter: Sleeping 2000ms before retry #1...
2014-04-16 18:08:29,638 WARN  [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
2014-04-16 18:08:29,638 INFO  [master:hadoop01:60000.oldLogCleaner] util.RetryCounter: Sleeping 4000ms before retry #2...
2014-04-16 18:08:33,638 WARN  [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
2014-04-16 18:08:33,639 INFO  [master:hadoop01:60000.oldLogCleaner] util.RetryCounter: Sleeping 8000ms before retry #3...
2014-04-16 18:08:41,639 WARN  [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, exception=org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
2014-04-16 18:08:41,639 ERROR [master:hadoop01:60000.oldLogCleaner] zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2014-04-16 18:08:41,639 WARN  [master:hadoop01:60000.oldLogCleaner] master.ReplicationLogCleaner: Aborting ReplicationLogCleaner because Failed to get list of replicators
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase-unsecure/replication/rs
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:273)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:573)
	at org.apache.hadoop.hbase.replication.ReplicationStateZKBase.getListOfReplicators(ReplicationStateZKBase.java:79)
	at org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.loadHLogsFromQueues(ReplicationLogCleaner.java:88)
	at org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.getDeletableFiles(ReplicationLogCleaner.java:67)
	at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:233)
	at org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
	at org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
	at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
	at java.lang.Thread.run(Thread.java:662)
2014-04-16 18:08:41,640 INFO  [master:hadoop01:60000.oldLogCleaner] master.ReplicationLogCleaner: Stopping replicationLogCleaner-0x14564a8dff30178, quorum=hadoop02:2181,hadoop01:2181,hadoop03:2181, baseZNode=/hbase-unsecure
2014-04-16 18:08:41,640 DEBUG [master:hadoop01:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find any region server that replicates, won't prevent any deletions.
2014-04-16 18:09:26,639 WARN  [master:hadoop01:60000.oldLogCleaner] cleaner.CleanerChore: A file cleanermaster:hadoop01:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://hdpcluster/apps/hbase/data/oldWALs
2014-04-16 18:10:26,640 WARN  [master:hadoop01:60000.oldLogCleaner] cleaner.CleanerChore: A file cleanermaster:hadoop01:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://hdpcluster/apps/hbase/data/oldWALs
2014-04-16 18:11:26,639 WARN  [master:hadoop01:60000.oldLogCleaner] cleaner.CleanerChore: A file cleanermaster:hadoop01:60000.oldLogCleaner is stopped, won't delete any more files in:hdfs://hdpcluster/apps/hbase/data/oldWALs


-----Original Message-----
From: Rabbit's Foot [mailto:rabbitsfoot@is-land.com.tw]
Sent: April 22, 2014 16:16
To: user@hbase.apache.org
Subject: Re: oldWALs too large

Hi

Please see HBASE-3489.


Cheers


2014-04-22 14:58 GMT+08:00 sunweiwei <su...@asiainfo-linkage.com>:

>  Hi
>
> I'm using HBase 0.96.0, with 1 HMaster and 3 RegionServers.
>
> The write load is about 10,000 to 100,000 requests per second (1~10w/s).
>
>
>
> Today I found the HBase Master hung, the RegionServers dead, and the
> oldWALs dir very large.
>
> /apps/hbase/data/data is about 800G. /apps/hbase/data/oldWALs is about
> 4.2T (see the sketch after this quoted message for one way to check
> these sizes).
>
> This caused HDFS to fill up.
>
>
>
> Any suggestions would be appreciated. Thanks.
>
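
For the sizes quoted above, something like this is a simple way to measure
and keep an eye on both directories (paths taken from the message itself):

  hdfs dfs -du -s -h /apps/hbase/data/data /apps/hbase/data/oldWALs

  # If replication was never enabled, it is worth confirming that no peers
  # exist before blaming the replication log cleaner (shell command name
  # as in 0.96-era HBase)
  echo 'list_peers' | hbase shell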