Posted to user@hbase.apache.org by Toni Moreno <to...@gmail.com> on 2012/04/02 16:07:59 UTC

Re: Broken HBase (Help Needed)

When I try to count rows, I get this output after a while:

hbase(main):001:0> list
TABLE
tsdb
tsdb-uid
2 row(s) in 0.7600 seconds

hbase(main):002:0> count 'tsdb-uid'

ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
find region for tsdb-uid,,99999999999999 after 7 tries.
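
For reference, that exception means the client retried the -ROOT-/.META.
lookup seven times without finding a server for the region covering that
key, which points at a region assignment problem rather than at the table
data itself. A quick check from the shell (a sketch; it assumes .META.
itself is still readable) would be:

  hbase(main):003:0> scan '.META.', {COLUMNS => ['info:regioninfo', 'info:server']}

If that scan fails too, the catalog regions are not deployed, and no
per-table command will work until they are.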



2012/4/2 Toni Moreno <to...@gmail.com>

>
> Hi guys.
>
> I have a working HBase 0.92.0 (with OpenTSDB 1.1.0). A problem happened
> some days ago, and now I cannot access my data; it looks like data
> corruption in HBase.
>
> How can I fix this corruption with HBase tools/commands?
>
>
>
> HBASE log shows:
>
> 2012-04-02 14:06:12,379 INFO org.apache.hadoop.fs.FSInputChecker: Found
> checksum error: b[630, 630]=
> org.apache.hadoop.fs.ChecksumException: Checksum error:
> file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423
> at 3668992
>         at
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
>         at
> org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at
> org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>         at
> org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at java.io.DataInputStream.read(DataInputStream.java:132)
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
>         at
> org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1888)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1934)
>         at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:206)
>         at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:180)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:789)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:407)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
>         at java.lang.Thread.run(Thread.java:662)
> 2012-04-02 14:06:12,380 DEBUG
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed
> file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/c332a6033e280b786219866513f45fe1/recovered.edits/0000000000000181211.temp
> 2012-04-02 14:06:12,381 WARN
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Found existing old
> edits file. It could be the result of a previous failed split attempt.
> Deleting
> file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/c332a6033e280b786219866513f45fe1/recovered.edits/0000000000000181211,
> length=1837832
> 2012-04-02 14:06:12,383 DEBUG
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed
> file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/f989c6d3d2e9a385913300b72499c21e/recovered.edits/0000000000000181210.temp
> 2012-04-02 14:06:12,383 WARN
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Found existing old
> edits file. It could be the result of a previous failed split attempt.
> Deleting
> file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/f989c6d3d2e9a385913300b72499c21e/recovered.edits/0000000000000181210,
> length=1830526
> 2012-04-02 14:06:12,386 INFO
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Processed 424 edits
> across 2 regions threw away edits for 0 regions; log
> file=file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423
> is corrupted=false progress failed=false
> 2012-04-02 14:06:12,386 WARN
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of
> file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423
> failed, returning error
> org.apache.hadoop.fs.ChecksumException: Checksum error:
> file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423
> at 3668992
>         at
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
>         at
> org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at
> org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>         at
> org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at java.io.DataInputStream.read(DataInputStream.java:132)
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
>         at
> org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1888)
>         at
> org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1934)
>         at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:206)
>         at
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:180)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:789)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:407)
>         at
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
>         at
> org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
>         at java.lang.Thread.run(Thread.java:662)
> 2012-04-02 14:06:12,399 INFO
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: successfully
> transitioned task
> /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> to final state err
> 2012-04-02 14:06:12,399 INFO
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker
> dwilyast02,48204,1333368305163 done with task
> /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> in 127ms
> 2012-04-02 14:06:12,399 INFO
> org.apache.hadoop.hbase.master.SplitLogManager: task
> /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> entered state err dwilyast02,48204,1333368305163
> 2012-04-02 14:06:12,400 WARN
> org.apache.hadoop.hbase.master.SplitLogManager: Error splitting
> /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> 2012-04-02 14:06:12,400 WARN
> org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs
> in [file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting,
> file:/opt/hbase/data/.logs/dwilyast02,64391,1332830608263-splitting]
> installed = 1 but only 0 done
> 2012-04-02 14:06:12,400 WARN
> org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting of
> [dwilyast02,55897,1332401896263, dwilyast02,64391,1332830608263]
> java.io.IOException: error or interrupt while splitting logs in
> [file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting,
> file:/opt/hbase/data/.logs/dwilyast02,64391,1332830608263-splitting] Task =
> installed = 1 done = 0 error = 1
>         at
> org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:268)
>         at
> org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:276)
>         at
> org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:216)
>         at
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:487)
>         at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
>         at
> org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:218)
>         at java.lang.Thread.run(Thread.java:662)
> 2012-04-02 14:06:12,410 DEBUG
> org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted
> /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> 2012-04-02 14:06:12,410 DEBUG
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or
> departed
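
From the log above, the master aborts initialization because distributed
log splitting hits a ChecksumException on an old WAL under
file:/opt/hbase/data/.logs, so -ROOT- and .META. are never deployed. Two
possible ways forward on 0.92 (both are assumptions, and both can lose the
edits still sitting in the damaged WAL) are sketched below.

Option 1: let the splitter skip unreadable logs instead of failing, via
hbase-site.xml:

  <property>
    <name>hbase.hlog.split.skip.errors</name>
    <value>true</value>
  </property>

Option 2: with HBase fully stopped, sideline the -splitting directories
named in the log by hand, then restart:

  # Paths taken from the log above; adjust the destination to taste.
  mkdir -p /opt/hbase/sidelined-wals
  mv '/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting' /opt/hbase/sidelined-wals/
  mv '/opt/hbase/data/.logs/dwilyast02,64391,1332830608263-splitting' /opt/hbase/sidelined-wals/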


-- 

Regards,

Toni Moreno

699706656

"If you would not be forgotten as soon as you are dead and rotten,
either write things worth reading or do things worth writing."

Benjamin Franklin

Re: Broken HBase (Help Needed)

Posted by Toni Moreno <to...@gmail.com>.
This is the generated output. What now? How can I recover the data?

# hbase hbck
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.2-1221870, built on 12/21/2011 20:46 GMT
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client environment:host.name
=dwilyast02
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.6.0_31
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=Sun Microsystems Inc.
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.home=/opt/jdk1.6.0_31_sun_hotspot/jre
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=/opt/hbase/bin/../conf:/opt/jdk1.6.0_31_sun_hotspot//lib/tools.jar:/opt/hbase/bin/..:/opt/hbase/bin/../hbase-0.92.0.jar:/opt/hbase/bin/../hbase-0.92.0-tests.jar:/opt/hbase/bin/../lib/activation-1.1.jar:/opt/hbase/bin/../lib/asm-3.1.jar:/opt/hbase/bin/../lib/avro-1.5.3.jar:/opt/hbase/bin/../lib/avro-ipc-1.5.3.jar:/opt/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/bin/../lib/commons-cli-1.2.jar:/opt/hbase/bin/../lib/commons-codec-1.4.jar:/opt/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/hbase/bin/../lib/commons-digester-1.8.jar:/opt/hbase/bin/../lib/commons-el-1.0.jar:/opt/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/hbase/bin/../lib/commons-lang-2.5.jar:/opt/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/hbase/bin/../lib/commons-math-2.1.jar:/opt/hbase/bin/../lib/commons-net-1.4.1.jar:/opt/hbase/bin/../lib/core-3.1.1.jar:/opt/hbase/bin/../lib/guava-r09.jar:/opt/hbase/bin/../lib/hadoop-core-1.0.0.jar:/opt/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/hbase/bin/../lib/httpclient-4.0.1.jar:/opt/hbase/bin/../lib/httpcore-4.0.1.jar:/opt/hbase/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/hbase/bin/../lib/jackson-mapper-asl-1.5.5.jar:/opt/hbase/bin/../lib/jackson-xc-1.5.5.jar:/opt/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/hbase/bin/../lib/jaxb-impl-2.1.12.jar:/opt/hbase/bin/../lib/jersey-core-1.4.jar:/opt/hbase/bin/../lib/jersey-json-1.4.jar:/opt/hbase/bin/../lib/jersey-server-1.4.jar:/opt/hbase/bin/../lib/jettison-1.1.jar:/opt/hbase/bin/../lib/jetty-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/libthrift-0.7.0.jar:/opt/hbase/bin/../lib/log4j-1.2.16.jar:/opt/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/servlet-api-2.5.jar:/opt/hbase/bin/../lib/slf4j-api-1.5.8.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/hbase/bin/../lib/snappy-java-1.0.3.2.jar:/opt/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/hbase/bin/../lib/velocity-1.7.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/zookeeper-3.4.2.jar
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/opt/hbase/bin/../lib/native/Linux-amd64-64
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=<NA>
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:os.version=2.6.18-194.el5
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client environment:user.name
=root
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:user.home=/root
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/root
12/04/03 08:28:21 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
12/04/03 08:28:21 INFO zookeeper.ClientCnxn: Opening socket connection to
server /127.0.0.1:2181
12/04/03 08:28:21 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 10227@dwilyast02
12/04/03 08:28:21 INFO zookeeper.ClientCnxn: Socket connection established
to localhost/127.0.0.1:2181, initiating session
12/04/03 08:28:21 INFO zookeeper.ClientCnxn: Session establishment complete
on server localhost/127.0.0.1:2181, sessionid = 0x13672f1ad5a0006,
negotiated timeout = 40000
Version: 0.92.0
12/04/03 08:29:22 DEBUG
client.HConnectionManager$HConnectionImplementation: Lookedup root region
location,
connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@76a9b9c;
serverName=
ERROR: Root Region or some of its attributes are null.
ERROR: Encountered fatal error. Exiting...
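
hbck is failing for the same underlying reason: the master aborted during
log splitting, so -ROOT- was never assigned and there is nothing for hbck
to inspect yet. Once the damaged WAL is out of the way (see the options
sketched earlier in the thread) and the master finishes startup, a
reasonable next step would be:

  # Region-by-region report of any inconsistencies.
  bin/hbase hbck -details
  # If inconsistencies remain, attempt the automatic fixes.
  bin/hbase hbck -fix

(-details and -fix are the flags I believe ship with 0.92.0's hbck; check
the usage output first.)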

2012/4/2 Ted Yu <yu...@gmail.com>

> Can you run 'bin/hbase hbck' and see if there is any inconsistency?
>
> Thanks




Re: Broken HBase (Help Needed)

Posted by Ted Yu <yu...@gmail.com>.
Can you run 'bin/hbase hbck' and see if there is any inconsistency?
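
For example (assuming the usual layout under /opt/hbase):

  cd /opt/hbase
  bin/hbase hbck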

Thanks
