Posted to user@hadoop.apache.org by A Laxmi <a....@gmail.com> on 2013/10/17 21:59:03 UTC

Hadoop HBase Pseudo mode - RegionServer disconnects after some time

Hi -

Please find below the log from the HBase master. I have tried all of the
fixes mentioned in various threads, yet I could not overcome this issue. I
made sure there is no 127.0.1.1 entry in /etc/hosts. Pinging the hostname
with ping -c 1 localhost returns the actual IP, not 127.0.0.1. I have
'localhost' in /etc/hostname and the actual IP address mapped to
localhost.localdomain, with localhost as an alias - something like

/etc/hosts -

192.***.*.*** localhost.localdomain localhost

/etc/hostname -

localhost
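
To double-check the name resolution, the checks I did were roughly along
these lines (the real IP is masked here the same way as above):

hostname
# should answer from the real IP, not 127.0.0.1 or 127.0.1.1
ping -c 1 localhost
# make sure there is no leftover 127.0.1.1 entry
grep 127.0.1.1 /etc/hosts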

I am using *Hadoop 0.20.205.0 and HBase 0.90.6 in pseudo-distributed mode*
to store data crawled by Apache Nutch 2.2.1. I can start Hadoop and HBase,
and jps shows all the expected processes. However, after I start the Nutch
crawl, about 40 minutes in (around the 4th parsing iteration) Nutch hangs,
and at that point jps shows every process except HRegionServer. Below is
the log.
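
To sketch the symptom, the jps output looks roughly like this (the PIDs are
made up and this is from memory, so treat it as illustrative only;
HQuorumPeer appears because HBase manages its own ZooKeeper here):

Right after starting Hadoop and HBase:
2101 NameNode
2245 DataNode
2398 SecondaryNameNode
2467 JobTracker
2589 TaskTracker
2712 HQuorumPeer
2803 HMaster
2911 HRegionServer

Around the 4th parsing iteration the list is the same, except the
HRegionServer line is gone.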

I have tried every fix I could find but could not get past this. I would
really appreciate help from someone on the HBase list.


2013-10-15 02:02:08,285 DEBUG
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Pushed=56 entries
from hdfs://localhost:8020/hbase/.logs/127.0.0.1,60020,1381814216471/
127.0.0.1%3A60020.1381816329235
2013-10-15 02:02:08,285 DEBUG
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog 28 of
29: hdfs://localhost:8020/hbase/.logs/127.0.0.1,60020,1381814216471/
127.0.0.1%3A60020.1381816367672, length=64818440
2013-10-15 02:02:08,285 WARN org.apache.hadoop.hbase.util.FSUtils: Running
on HDFS without append enabled may result in data loss
2013-10-15 02:02:08,554 DEBUG org.apache.hadoop.hbase.master.HMaster: Not
running balancer because processing dead regionserver(s):
[127.0.0.1,60020,1381814216471]
2013-10-15 02:02:08,556 INFO org.apache.hadoop.hbase.catalog.CatalogTracker:
Failed verification of .META.,,1 at address=127.0.0.1:60020;
java.net.ConnectException: Connection refused
2013-10-15 02:02:08,559 INFO org.apache.hadoop.hbase.catalog.CatalogTracker:
Current cached META location is not valid, resetting
2013-10-15 02:02:08,601 WARN org.apache.hadoop.hbase.master.CatalogJanitor:
Failed scan of catalog table
org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: Timed out
(2147483647ms)
        at
org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:390)
        at
org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnectionDefault(CatalogTracker.java:422)
        at
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:255)
        at
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:237)
        at
org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:120)
        at
org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:88)
        at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
2013-10-15 02:02:08,842 INFO
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: syncFs --
HDFS-200 -- not available, dfs.support.append=false
2013-10-15 02:02:08,842 DEBUG
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Creating writer
path=hdfs://localhost:8020/hbase/1_webpage/853ef78be7c0853208e865a9ff13d5fb/recovered.edits/0000000000000001556.temp
region=853ef78be7c0853208e865a9ff13d5fb
2013-10-15 02:02:09,443 DEBUG
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Pushed=39 entries
from hdfs://localhost:8020/hbase/.logs/127.0.0.1,60020,1381814216471/
127.0.0.1%3A60020.1381816367672
2013-10-15 02:02:09,444 DEBUG
org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Splitting hlog 29 of
29: hdfs://localhost:8020/hbase/.logs/127.0.0.1,60020,1381814216471/
127.0.0.1%3A60020.1381816657239, length=0
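
One thing that stands out in the log is the "Running on HDFS without append
enabled may result in data loss" warning together with
"dfs.support.append=false". Could that be related to the RegionServer dying?
If so, I assume the property being referred to is the one below, set in both
hdfs-site.xml and hbase-site.xml (just my guess at the intended setting, not
something I have confirmed fixes this):

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>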

Thanks for your help!