Posted to user@hbase.apache.org by Geoff Hendrey <gh...@decarta.com> on 2011/08/06 21:12:23 UTC

corrupt .logs block

Hey -

 

I've got a corrupt HDFS block in a region server's ".logs" directory.
Also, "hbase hbck" reports 5 inconsistencies. 

 

.META. shows a region as offline, and it won't come online. What are my
options? "hbck -fix" didn't do anything. It's not "live" data and I
don't mind losing it. However, if I delete the corrupt HDFS .logs files,
would it screw up HBase? I don't care if I lose some writes from the
log. I just don't want to totally bork hbase.

 

-geoff


Re: corrupt .logs block

Posted by Harsh J <ha...@cloudera.com>.
In the HBase shell, the command that executes this is 'assign'. It takes a second param, 'true', for the force part. Look up its help message in the shell; that should be enough, I think.
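For example, against a shell of that era (a sketch, not verified on your cluster; the region name below is a placeholder for the full region name exactly as it appears in .META.):

    hbase(main):001:0> help 'assign'
    hbase(main):002:0> assign 'MYTABLE,startkey,1312345678901.abcdef1234567890.', true

The second argument is the force flag that gets passed straight through to HBaseAdmin.assign(), per the Ruby snippet Ted posted below.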

On 07-Aug-2011, at 11:11 AM, Geoff Hendrey wrote:

> I'd like to try it, but looks like I need to learn Ruby first? 
> 
> Sent from my iPad
> 
> On Aug 6, 2011, at 12:42 PM, "Ted Yu" <yu...@gmail.com> wrote:
> 
>> Have you tried calling assign on this region ?
>>   # Assign a region
>>   def assign(region_name, force)
>>     @admin.assign(region_name.to_java_bytes,
>> java.lang.Boolean::valueOf(force))
>>   end
>> 
>> Cheers
>> 
>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com> wrote:
>> 
>>> Hey -
>>> 
>>> 
>>> 
>>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>>> Also, "hbase hbck" reports 5 inconsistencies.
>>> 
>>> 
>>> 
>>> .META. shows a region as offline, and it won't come online. What are my
>>> options. "hbck -fix" didn't do anything. It's not "live" data and I
>>> don't mind losing it. However, if I delete the corrupt HDFS .logs files,
>>> would it screw up HBase? I don't care if I lose some writes from the
>>> log. I just don't want to totally bork hbase.
>>> 
>>> 
>>> 
>>> -geoff
>>> 
>>> 


Re: corrupt .logs block

Posted by Geoff Hendrey <gh...@decarta.com>.
I'd like to try it, but looks like I need to learn Ruby first? 

Sent from my iPad

On Aug 6, 2011, at 12:42 PM, "Ted Yu" <yu...@gmail.com> wrote:

> Have you tried calling assign on this region ?
>    # Assign a region
>    def assign(region_name, force)
>      @admin.assign(region_name.to_java_bytes,
> java.lang.Boolean::valueOf(force))
>    end
> 
> Cheers
> 
> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com> wrote:
> 
>> Hey -
>> 
>> 
>> 
>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>> Also, "hbase hbck" reports 5 inconsistencies.
>> 
>> 
>> 
>> .META. shows a region as offline, and it won't come online. What are my
>> options. "hbck -fix" didn't do anything. It's not "live" data and I
>> don't mind losing it. However, if I delete the corrupt HDFS .logs files,
>> would it screw up HBase? I don't care if I lose some writes from the
>> log. I just don't want to totally bork hbase.
>> 
>> 
>> 
>> -geoff
>> 
>> 

Re: corrupt .logs block

Posted by Ted Yu <yu...@gmail.com>.
Have you tried calling assign on this region ?
    # Assign a region
    def assign(region_name, force)
      @admin.assign(region_name.to_java_bytes,
java.lang.Boolean::valueOf(force))
    end

Cheers

On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com> wrote:

> Hey -
>
>
>
> I've got a corrupt HDFS block in a region server's ".logs" directory.
> Also, "hbase hbck" reports 5 inconsistencies.
>
>
>
> .META. shows a region as offline, and it won't come online. What are my
> options. "hbck -fix" didn't do anything. It's not "live" data and I
> don't mind losing it. However, if I delete the corrupt HDFS .logs files,
> would it screw up HBase? I don't care if I lose some writes from the
> log. I just don't want to totally bork hbase.
>
>
>
> -geoff
>
>

Re: corrupt .logs block

Posted by Stack <st...@duboce.net>.
On Thu, Aug 11, 2011 at 10:36 AM, Geoff Hendrey <gh...@decarta.com> wrote:
> so I deleted the corrupt .logs files. OK, fine, no more issue there. But a handful of regions in a very large table (2000+ regions) are offline (".META." says offline=true).
>

If you enable the table, does the status get changed?  If you unassign
this region -- it will probably fail -- and then assign it, what
happens? Is it possible that this region was on a 0.20 hbase?
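That experiment, in shell terms (a sketch; table and region names below are placeholders, and the trailing 'true' forces the operation):

    hbase(main):001:0> enable 'MYTABLE'
    hbase(main):002:0> unassign 'REGION_NAME', true
    hbase(main):003:0> assign 'REGION_NAME', true

If unassign errors out, note the exception; it usually says why the master thinks the region can't move.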

St.Ack

RE: corrupt .logs block

Posted by Rohit Nigam <rn...@decarta.com>.
Hi
We are having an issue where a job hangs forever at 99% (in the
reducer) while inserting into a table. "hbase hbck -details" finds an
inconsistency in the table I am trying to insert data into:

Chain of regions in table NAM_CLUSTERKEYS3 is broken; edges does not
contain
00000000|00000000000000000USA|0000240b|000000000000000LAKE,|002ff027|000
0000000CONSULTING|006e8f90|000000000000000000NY:0
I tried "hbase hbck -fix", but it does not help.

Tried scanning the '.META.' table:

scan
'.META.',{STARTROW=>'NAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000
240b|000000000000000LAKE',LIMIT=>1}

Getting this error:


ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying
to contact region server doop6. for region .META.,,1, row
'NAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000240b|000000000000000
LAKE', but failed after 7 attempts.
Exceptions:
java.io.IOException: java.io.IOException:
java.lang.IllegalArgumentException: No 44 in
<WKNAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000240b|0000000000000
00LAK????????>, length=58, offset=27
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE
(HRegionServer.java:992)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE
(HRegionServer.java:981)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionSe
rver.java:1783)
        at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor
Impl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at
org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:103
9)
Caused by: java.lang.IllegalArgumentException: No 44 in
<WKNAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000240b|0000000000000
00LAK????????>, length=58, offset=27
        at
org.apache.hadoop.hbase.KeyValue.getRequiredDelimiterInReverse(KeyValue.
java:1281)
        at
org.apache.hadoop.hbase.KeyValue$MetaKeyComparator.compareRows(KeyValue.
java:1827)
        at
org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(KeyValue.java:186
6)
        at
org.apache.hadoop.hbase.util.Bytes.binarySearch(Bytes.java:1159)
        at
org.apache.hadoop.hbase.io.hfile.HFile$BlockIndex.blockContainingKey(HFi
le.java:1618)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Reader.blockContainingKey(HFile.j
ava:918)
        at
org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:
1296)
        at
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(Stor
eFileScanner.java:136)
        at
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScan
ner.java:96)
        at
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.ja
va:77)
        at
org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1341)
        at
org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegio
n.java:2269)
        at
org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(
HRegion.java:1126)
        at
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:111
8)
        at
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:110
2)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionSe
rver.java:1781)
        ... 5 more

java.io.IOException: java.io.IOException:
java.lang.IllegalArgumentException: No 44 in
<WKNAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000240b|0000000000000
00LAK????????>, length=58, offset=27
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE
(HRegionServer.java:992)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE
(HRegionServer.java:981)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionSe
rver.java:1783)
        at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor
Impl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at
org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:103
9)
Caused by: java.lang.IllegalArgumentException: No 44 in
<WKNAM_CLUSTERKEYS3,00000000|00000000000000000USA|0000240b|0000000000000
00LAK????????>, length=58, offset=27
        at
org.apache.hadoop.hbase.KeyValue.getRequiredDelimiterInReverse(KeyValue.
java:1281)
        at
org.apache.hadoop.hbase.KeyValue$MetaKeyComparator.compareRows(KeyValue.
java:1827)
        at
org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(KeyValue.java:186
6)
        at
org.apache.hadoop.hbase.util.Bytes.binarySearch(Bytes.java:1159)
        at
org.apache.hadoop.hbase.io.hfile.HFile$BlockIndex.blockContainingKey(HFi
le.java:1618)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.blockContaini

How can I fix the problem in the table? Any help would be appreciated.

Rohit

-----Original Message-----
From: Geoff Hendrey 
Sent: Thursday, August 11, 2011 2:22 PM
To: Jinsong Hu; user@hbase.apache.org
Subject: RE: corrupt .logs block

Hey -

Our table behaves fine until we try to run a mapreduce job that reads
from and writes to the table. When we try to retrieve keys from the
afflicted regions, the job just hangs forever. Interestingly, we never
get timeouts of any sort; this is different from other failures we've
seen, in which we'd get expired leases. This is a critical bug for us
because it is preventing the launch of a product databuild that I have
to complete in the next week.

Does anyone have any suggestions as to how I can bring the afflicted
regions online? Worst case, delete the regions?

-geoff

-----Original Message-----
From: Jinsong Hu [mailto:jinsong_hu@hotmail.com] 
Sent: Thursday, August 11, 2011 11:47 AM
To: user@hbase.apache.org
Cc: Search
Subject: Re: corrupt .logs block

I ran into the same issue. I tried check_meta.rb --fix and
add_table.rb, and I still get the same hbck "inconsistent" table;
however, I am able to do a rowcount for the table and there is no
problem.

Jimmy


--------------------------------------------------
From: "Geoff Hendrey" <gh...@decarta.com>
Sent: Thursday, August 11, 2011 10:36 AM
To: <us...@hbase.apache.org>
Cc: "Search" <Se...@decarta.com>
Subject: RE: corrupt .logs block

> so I deleted the corrupt .logs files. OK, fine, no more issue there.
> But a handful of regions in a very large table (2000+ regions) are
> offline (".META." says offline=true).
>
> How do I go about trying to get the region online, and why does
> restarting hbase have no effect (the region is still offline)?
>
> Tried 'hbck -fix', no effect. Hbck simply lists the table as 
> "inconsistent".
>
> Would appreciate any advice on how to resolve this.
>
> Thanks,
> geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
Stack
> Sent: Monday, August 08, 2011 4:25 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> Well, if it's a log that is no longer used, then you could just
> delete it. That'll get rid of the fsck complaint. (True, logs are not
> per table, so to be safe you'd need to flush all tables -- this would
> get all edits that the log could be carrying out into hfiles in the
> filesystem.)
>
> St.Ack
>
> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com> 
> wrote:
>> Ah. Thanks for that. No, I don't need the log anymore. I am aware of
>> how to flush a table from the hbase shell. But since "fsck /" tells
>> me a log file is corrupt, but not which table the corruption pertains
>> to, does this mean I have to flush all my tables (I have a lot of
>> tables)?
>>
>> -geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>> Stack
>> Sent: Monday, August 08, 2011 4:09 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>> I've got a corrupt HDFS block in a region server's ".logs"
>>> directory.
>>
>> You see this when you do hdfs fsck?  Is the log still needed?  You
>> could do a flush across the cluster and that should do away with your
>> dependency on this log.
>>
>> St.Ack
>>
> 

Re: corrupt .logs block

Posted by Stack <st...@duboce.net>.
Do you have rows in your .META. where the info:regioninfo column is
missing, Geoff? Hack check_meta.rb to emit each row before it goes to
deserialize the HRegionInfo, so you can find the problem row. If there
is no info:regioninfo, delete the row, or change check_meta.rb to use
getHRegionInfoOrNull instead of getHRegionInfo and, if it returns
null, just move past that row.
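A rough sketch of that change inside check_meta.rb's scan loop (hypothetical code; the variable names and surrounding structure are assumptions, not the script's actual source):

    while (result = scanner.next())
      rowid = Bytes.toString(result.getRow())
      puts "Checking: #{rowid}"   # emit the row before deserializing
      bytes = result.getValue(HConstants::CATALOG_FAMILY,
                              HConstants::REGIONINFO_QUALIFIER)
      if bytes.nil?
        puts "No info:regioninfo for #{rowid}; skipping (delete candidate)"
        next
      end
      hri = Writables.getHRegionInfoOrNull(bytes)
      next if hri.nil?            # move past the null row instead of NPE'ing
      # ... existing per-region checks continue here ...
    end

The last row printed before the NPE would have been your problem row.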

St.Ack

On Thu, Aug 11, 2011 at 10:24 PM, Geoff Hendrey <gh...@decarta.com> wrote:
> Thanks,
>
> check_meta.rb stack traces with NPE...
>
> [hroot@doop10 bin]$ hbase org.jruby.Main check_meta.rb
> Writables.java:75:in
> `org.apache.hadoop.hbase.util.Writables.getWritable':
> java.lang.NullPointerException: null (NativeException)
>        from Writables.java:119:in
> `org.apache.hadoop.hbase.util.Writables.getHRegionInfo'
>        from null:-1:in `sun.reflect.GeneratedMethodAccessor6.invoke'
>        from DelegatingMethodAccessorImpl.java:43:in
> `sun.reflect.DelegatingMethodAccessorImpl.invoke'
>        from Method.java:616:in `java.lang.reflect.Method.invoke'
>        from JavaMethod.java:196:in
> `org.jruby.javasupport.JavaMethod.invokeWithExceptionHandling'
>        from JavaMethod.java:182:in
> `org.jruby.javasupport.JavaMethod.invoke_static'
>        from JavaClass.java:371:in
> `org.jruby.javasupport.JavaClass$StaticMethodInvoker.execute'
>        from SimpleCallbackMethod.java:81:in
> `org.jruby.internal.runtime.methods.SimpleCallbackMethod.call'
>         ... 16 levels...
>        from Main.java:183:in `org.jruby.Main.runInterpreter'
>        from Main.java:120:in `org.jruby.Main.run'
>        from Main.java:95:in `org.jruby.Main.main'
> Complete Java stackTrace
> java.lang.NullPointerException
>        at
> org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:75)
>        at
> org.apache.hadoop.hbase.util.Writables.getHRegionInfo(Writables.java:119
> )
>        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>        at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor
> Impl.java:43)
>        at java.lang.reflect.Method.invoke(Method.java:616)
>        at
> org.jruby.javasupport.JavaMethod.invokeWithExceptionHandling(JavaMethod.
> java:196)
>        at
> org.jruby.javasupport.JavaMethod.invoke_static(JavaMethod.java:182)
>        at
> org.jruby.javasupport.JavaClass$StaticMethodInvoker.execute(JavaClass.ja
> va:371)
>        at
> org.jruby.internal.runtime.methods.SimpleCallbackMethod.call(SimpleCallb
> ackMethod.java:81)
>        at
> org.jruby.evaluator.EvaluationState.callNode(EvaluationState.java:571)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
> 7)
>        at
> org.jruby.evaluator.EvaluationState.localAsgnNode(EvaluationState.java:1
> 254)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:28
> 6)
>        at
> org.jruby.evaluator.EvaluationState.blockNode(EvaluationState.java:533)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
> 1)
>        at
> org.jruby.evaluator.EvaluationState.whileNode(EvaluationState.java:1793)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:38
> 7)
>        at
> org.jruby.evaluator.EvaluationState.blockNode(EvaluationState.java:533)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
> 1)
>        at
> org.jruby.evaluator.EvaluationState.rootNode(EvaluationState.java:1628)
>        at
> org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:35
> 6)
>        at
> org.jruby.evaluator.EvaluationState.eval(EvaluationState.java:164)
>        at org.jruby.Ruby.eval(Ruby.java:278)
>        at org.jruby.Ruby.compileOrFallbackAndRun(Ruby.java:306)
>        at org.jruby.Main.runInterpreter(Main.java:238)
>        at org.jruby.Main.runInterpreter(Main.java:183)
>        at org.jruby.Main.run(Main.java:120)
>        at org.jruby.Main.main(Main.java:95)
>
> -----Original Message-----
> From: Jinsong Hu [mailto:jinsong_hu@hotmail.com]
> Sent: Thursday, August 11, 2011 3:18 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> as I said, run "hbase org.jruby.Main add_table.rb <table_name>" first,
> then
> run "hbase org.jruby.Main check_meta.rb --fix"
> then restart hbase.
>
> It doesn't completely solve the problem for me, as hbck still complains.
> but at least it recovers all data and I can do full rowcount for the
> table.
>
>
> Jimmy.
>
> --------------------------------------------------
> From: "Geoff Hendrey" <gh...@decarta.com>
> Sent: Thursday, August 11, 2011 2:21 PM
> To: "Jinsong Hu" <ji...@hotmail.com>; <us...@hbase.apache.org>
> Subject: RE: corrupt .logs block
>
>> Hey -
>>
>> Our table behaves fine until we try to do a mapreduce job that reads
> and
>> writes from the table. When we try to retrieve keys from the afflicted
>> regions, the job just hangs forever. It's interesting because we never
>> get timeouts of any sort. This is different than other failures we've
>> seen in which we'd get expired leases. This is a critical bug for us
>> because it is preventing the launch of a product databuild which I
> have
>> to complete in the next week.
>>
>> Does anyone have any suggestions as to how I can bring the afflicted
>> regions online? Worst case, delete the regions?
>>
>> -geoff
>>
>> -----Original Message-----
>> From: Jinsong Hu [mailto:jinsong_hu@hotmail.com]
>> Sent: Thursday, August 11, 2011 11:47 AM
>> To: user@hbase.apache.org
>> Cc: Search
>> Subject: Re: corrupt .logs block
>>
>> I ran into the same issue. I tried check_meta.rb --fix and
>> add_table.rb, and I still get the same hbck "inconsistent" table;
>> however, I am able to do a rowcount for the table and there is no
>> problem.
>>
>> Jimmy
>>
>>
>> --------------------------------------------------
>> From: "Geoff Hendrey" <gh...@decarta.com>
>> Sent: Thursday, August 11, 2011 10:36 AM
>> To: <us...@hbase.apache.org>
>> Cc: "Search" <Se...@decarta.com>
>> Subject: RE: corrupt .logs block
>>
>>> so I deleted the corrupt .logs files. OK, fine, no more issue there.
>>> But a handful of regions in a very large table (2000+ regions) are
>>> offline (".META." says offline=true).
>>>
>>> How do I go about trying to get the region online, and why does
>>> restarting hbase have no effect (the region is still offline)?
>>>
>>> Tried 'hbck -fix', no effect. Hbck simply lists the table as
>>> "inconsistent".
>>>
>>> Would appreciate any advice on how to resolve this.
>>>
>>> Thanks,
>>> geoff
>>>
>>> -----Original Message-----
>>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>> Stack
>>> Sent: Monday, August 08, 2011 4:25 PM
>>> To: user@hbase.apache.org
>>> Subject: Re: corrupt .logs block
>>>
>>> Well, if it's a log that is no longer used, then you could just
>>> delete it. That'll get rid of the fsck complaint. (True, logs are
>>> not per table, so to be safe you'd need to flush all tables -- this
>>> would get all edits that the log could be carrying out into hfiles
>>> in the filesystem.)
>>>
>>> St.Ack
>>>
>>> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com>
>>> wrote:
>>>> Ah. Thanks for that. No, I don't need the log anymore. I am aware
>>>> of how to flush a table from the hbase shell. But since "fsck /"
>>>> tells me a log file is corrupt, but not which table the corruption
>>>> pertains to, does this mean I have to flush all my tables (I have a
>>>> lot of tables)?
>>>>
>>>> -geoff
>>>>
>>>> -----Original Message-----
>>>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>>>> Stack
>>>> Sent: Monday, August 08, 2011 4:09 PM
>>>> To: user@hbase.apache.org
>>>> Subject: Re: corrupt .logs block
>>>>
>>>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey
> <gh...@decarta.com>
>>>> wrote:
>>>>> I've got a corrupt HDFS block in a region server's ".logs"
>>>>> directory.
>>>>
>>>> You see this when you do hdfs fsck?  Is the log still needed?  You
>>>> could do a flush across the cluster and that should do away with
>>>> your dependency on this log.
>>>>
>>>> St.Ack
>>>>
>>>
>>
>

RE: corrupt .logs block

Posted by Geoff Hendrey <gh...@decarta.com>.
Thanks, 

check_meta.rb stack traces with NPE...

[hroot@doop10 bin]$ hbase org.jruby.Main check_meta.rb
Writables.java:75:in
`org.apache.hadoop.hbase.util.Writables.getWritable':
java.lang.NullPointerException: null (NativeException)
        from Writables.java:119:in
`org.apache.hadoop.hbase.util.Writables.getHRegionInfo'
        from null:-1:in `sun.reflect.GeneratedMethodAccessor6.invoke'
        from DelegatingMethodAccessorImpl.java:43:in
`sun.reflect.DelegatingMethodAccessorImpl.invoke'
        from Method.java:616:in `java.lang.reflect.Method.invoke'
        from JavaMethod.java:196:in
`org.jruby.javasupport.JavaMethod.invokeWithExceptionHandling'
        from JavaMethod.java:182:in
`org.jruby.javasupport.JavaMethod.invoke_static'
        from JavaClass.java:371:in
`org.jruby.javasupport.JavaClass$StaticMethodInvoker.execute'
        from SimpleCallbackMethod.java:81:in
`org.jruby.internal.runtime.methods.SimpleCallbackMethod.call'
         ... 16 levels...
        from Main.java:183:in `org.jruby.Main.runInterpreter'
        from Main.java:120:in `org.jruby.Main.run'
        from Main.java:95:in `org.jruby.Main.main'
Complete Java stackTrace
java.lang.NullPointerException
        at
org.apache.hadoop.hbase.util.Writables.getWritable(Writables.java:75)
        at
org.apache.hadoop.hbase.util.Writables.getHRegionInfo(Writables.java:119
)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor
Impl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at
org.jruby.javasupport.JavaMethod.invokeWithExceptionHandling(JavaMethod.
java:196)
        at
org.jruby.javasupport.JavaMethod.invoke_static(JavaMethod.java:182)
        at
org.jruby.javasupport.JavaClass$StaticMethodInvoker.execute(JavaClass.ja
va:371)
        at
org.jruby.internal.runtime.methods.SimpleCallbackMethod.call(SimpleCallb
ackMethod.java:81)
        at
org.jruby.evaluator.EvaluationState.callNode(EvaluationState.java:571)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
7)
        at
org.jruby.evaluator.EvaluationState.localAsgnNode(EvaluationState.java:1
254)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:28
6)
        at
org.jruby.evaluator.EvaluationState.blockNode(EvaluationState.java:533)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
1)
        at
org.jruby.evaluator.EvaluationState.whileNode(EvaluationState.java:1793)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:38
7)
        at
org.jruby.evaluator.EvaluationState.blockNode(EvaluationState.java:533)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:20
1)
        at
org.jruby.evaluator.EvaluationState.rootNode(EvaluationState.java:1628)
        at
org.jruby.evaluator.EvaluationState.evalInternal(EvaluationState.java:35
6)
        at
org.jruby.evaluator.EvaluationState.eval(EvaluationState.java:164)
        at org.jruby.Ruby.eval(Ruby.java:278)
        at org.jruby.Ruby.compileOrFallbackAndRun(Ruby.java:306)
        at org.jruby.Main.runInterpreter(Main.java:238)
        at org.jruby.Main.runInterpreter(Main.java:183)
        at org.jruby.Main.run(Main.java:120)
        at org.jruby.Main.main(Main.java:95)

-----Original Message-----
From: Jinsong Hu [mailto:jinsong_hu@hotmail.com] 
Sent: Thursday, August 11, 2011 3:18 PM
To: user@hbase.apache.org
Subject: Re: corrupt .logs block

as I said, run "hbase org.jruby.Main add_table.rb <table_name>" first,
then 
run "hbase org.jruby.Main check_meta.rb --fix"
then restart hbase.

It doesn't completely solve the problem for me, as hbck still complains.
but at least it recovers all data and I can do full rowcount for the
table.


Jimmy.

--------------------------------------------------
From: "Geoff Hendrey" <gh...@decarta.com>
Sent: Thursday, August 11, 2011 2:21 PM
To: "Jinsong Hu" <ji...@hotmail.com>; <us...@hbase.apache.org>
Subject: RE: corrupt .logs block

> Hey -
>
> Our table behaves fine until we try to do a mapreduce job that reads
and
> writes from the table. When we try to retrieve keys from the afflicted
> regions, the job just hangs forever. It's interesting because we never
> get timeouts of any sort. This is different than other failures we've
> seen in which we'd get expired leases. This is a critical bug for us
> because it is preventing the launch of a product databuild which I
have
> to complete in the next week.
>
> Does anyone have any suggestions as to how I can bring the afflicted
> regions online? Worst case, delete the regions?
>
> -geoff
>
> -----Original Message-----
> From: Jinsong Hu [mailto:jinsong_hu@hotmail.com]
> Sent: Thursday, August 11, 2011 11:47 AM
> To: user@hbase.apache.org
> Cc: Search
> Subject: Re: corrupt .logs block
>
> I ran into the same issue. I tried check_meta.rb --fix and
> add_table.rb, and I still get the same hbck "inconsistent" table;
> however, I am able to do a rowcount for the table and there is no
> problem.
>
> Jimmy
>
>
> --------------------------------------------------
> From: "Geoff Hendrey" <gh...@decarta.com>
> Sent: Thursday, August 11, 2011 10:36 AM
> To: <us...@hbase.apache.org>
> Cc: "Search" <Se...@decarta.com>
> Subject: RE: corrupt .logs block
>
>> so I deleted the corrupt .logs files. OK, fine, no more issue there.
>> But a handful of regions in a very large table (2000+ regions) are
>> offline (".META." says offline=true).
>>
>> How do I go about trying to get the region online, and why does
>> restarting hbase have no effect (the region is still offline)?
>>
>> Tried 'hbck -fix', no effect. Hbck simply lists the table as
>> "inconsistent".
>>
>> Would appreciate any advice on how to resolve this.
>>
>> Thanks,
>> geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
> Stack
>> Sent: Monday, August 08, 2011 4:25 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> Well, if it's a log that is no longer used, then you could just
>> delete it. That'll get rid of the fsck complaint. (True, logs are
>> not per table, so to be safe you'd need to flush all tables -- this
>> would get all edits that the log could be carrying out into hfiles
>> in the filesystem.)
>>
>> St.Ack
>>
>> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>> Ah. Thanks for that. No, I don't need the log anymore. I am aware
>>> of how to flush a table from the hbase shell. But since "fsck /"
>>> tells me a log file is corrupt, but not which table the corruption
>>> pertains to, does this mean I have to flush all my tables (I have a
>>> lot of tables)?
>>>
>>> -geoff
>>>
>>> -----Original Message-----
>>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>>> Stack
>>> Sent: Monday, August 08, 2011 4:09 PM
>>> To: user@hbase.apache.org
>>> Subject: Re: corrupt .logs block
>>>
>>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey
<gh...@decarta.com>
>>> wrote:
>>>> I've got a corrupt HDFS block in a region server's ".logs"
>>>> directory.
>>>
>>> You see this when you do hdfs fsck?  Is the log still needed?  You
>>> could do a flush across the cluster and that should do away with
>>> your dependency on this log.
>>>
>>> St.Ack
>>>
>>
> 

Re: corrupt .logs block

Posted by Jinsong Hu <ji...@hotmail.com>.
as I said, run "hbase org.jruby.Main add_table.rb <table_name>" first, then 
run "hbase org.jruby.Main check_meta.rb --fix"
then restart hbase.

It doesn't completely solve the problem for me, as hbck still complains.
but at least it recovers all data and I can do full rowcount for the table.


Jimmy.

--------------------------------------------------
From: "Geoff Hendrey" <gh...@decarta.com>
Sent: Thursday, August 11, 2011 2:21 PM
To: "Jinsong Hu" <ji...@hotmail.com>; <us...@hbase.apache.org>
Subject: RE: corrupt .logs block

> Hey -
>
> Our table behaves fine until we try to do a mapreduce job that reads and
> writes from the table. When we try to retrieve keys from the afflicted
> regions, the job just hangs forever. It's interesting because we never
> get timeouts of any sort. This is different than other failures we've
> seen in which we'd get expired leases. This is a critical bug for us
> because it is preventing the launch of a product databuild which I have
> to complete in the next week.
>
> Does anyone have any suggestions as to how I can bring the afflicted
> regions online? Worst case, delete the regions?
>
> -geoff
>
> -----Original Message-----
> From: Jinsong Hu [mailto:jinsong_hu@hotmail.com]
> Sent: Thursday, August 11, 2011 11:47 AM
> To: user@hbase.apache.org
> Cc: Search
> Subject: Re: corrupt .logs block
>
> I ran into the same issue. I tried check_meta.rb --fix and
> add_table.rb, and I still get the same hbck "inconsistent" table;
> however, I am able to do a rowcount for the table and there is no
> problem.
>
> Jimmy
>
>
> --------------------------------------------------
> From: "Geoff Hendrey" <gh...@decarta.com>
> Sent: Thursday, August 11, 2011 10:36 AM
> To: <us...@hbase.apache.org>
> Cc: "Search" <Se...@decarta.com>
> Subject: RE: corrupt .logs block
>
>> so I deleted the corrupt .logs files. OK, fine, no more issue there.
>> But a handful of regions in a very large table (2000+ regions) are
>> offline (".META." says offline=true).
>>
>> How do I go about trying to get the region online, and why does
>> restarting hbase have no effect (the region is still offline)?
>>
>> Tried 'hbck -fix', no effect. Hbck simply lists the table as
>> "inconsistent".
>>
>> Would appreciate any advice on how to resolve this.
>>
>> Thanks,
>> geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
> Stack
>> Sent: Monday, August 08, 2011 4:25 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> Well, if it's a log that is no longer used, then you could just
>> delete it. That'll get rid of the fsck complaint. (True, logs are
>> not per table, so to be safe you'd need to flush all tables -- this
>> would get all edits that the log could be carrying out into hfiles
>> in the filesystem.)
>>
>> St.Ack
>>
>> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>> Ah. Thanks for that. No, I don't need the log anymore. I am aware
>>> of how to flush a table from the hbase shell. But since "fsck /"
>>> tells me a log file is corrupt, but not which table the corruption
>>> pertains to, does this mean I have to flush all my tables (I have a
>>> lot of tables)?
>>>
>>> -geoff
>>>
>>> -----Original Message-----
>>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>>> Stack
>>> Sent: Monday, August 08, 2011 4:09 PM
>>> To: user@hbase.apache.org
>>> Subject: Re: corrupt .logs block
>>>
>>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
>>> wrote:
>>>> I've got a corrupt HDFS block in a region server's ".logs"
> directory.
>>>
>>> You see this when you do hdfs fsck?  Is the log still needed?  You
>>> could do a flush across the cluster and that should do away with your
>>> dependency on this log.
>>>
>>> St.Ack
>>>
>>
> 

RE: corrupt .logs block

Posted by Geoff Hendrey <gh...@decarta.com>.
Hey -

Our table behaves fine until we run a mapreduce job that reads from and
writes to the table. When we try to retrieve keys from the afflicted
regions, the job just hangs forever. Interestingly, we never get
timeouts of any sort; this is different from other failures we've seen,
in which we'd get expired leases. This is a critical bug for us because
it is blocking the launch of a product databuild that I have to complete
in the next week.

Does anyone have any suggestions as to how I can bring the afflicted
regions online? Worst case, delete the regions?

-geoff
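One avenue raised earlier in the thread is force-assigning the offline region from the HBase shell ('assign' with a second, force argument). A minimal sketch, assuming a 0.90-era shell where 'assign' takes two arguments; 'REGION_NAME' is a placeholder for the full region name as shown in .META.:

```shell
# Force-assign an offline region from the HBase shell.
# 'REGION_NAME' is a placeholder; substitute the full region name
# from .META. The second argument ('true') forces the assignment.
if command -v hbase >/dev/null 2>&1; then
  echo "assign 'REGION_NAME', true" | hbase shell
else
  # Not on a cluster node; nothing to do.
  echo "hbase not found on PATH; run this on a cluster node"
fi
```

If the shell reports an error, check the region server and master logs for why the assignment was rejected.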

-----Original Message-----
From: Jinsong Hu [mailto:jinsong_hu@hotmail.com] 
Sent: Thursday, August 11, 2011 11:47 AM
To: user@hbase.apache.org
Cc: Search
Subject: Re: corrupt .logs block

I ran into the same issue. I tried check_meta.rb --fix and add_table.rb,
and still get the same hbck "inconsistent" report; however, I am able to
do a rowcount for the table with no problem.

Jimmy


--------------------------------------------------
From: "Geoff Hendrey" <gh...@decarta.com>
Sent: Thursday, August 11, 2011 10:36 AM
To: <us...@hbase.apache.org>
Cc: "Search" <Se...@decarta.com>
Subject: RE: corrupt .logs block

> so I delete the corrupt .logs files. OK, fine no more issue there.
But a 
> handful of regions in a very large table (2000+ regions) are offline 
> (".META." says offline=true).
>
> How do I go about trying to get the region online, and how come
restarting 
> hbase has no effect (region still offline).
>
> Tried 'hbck -fix', no effect. Hbck simply lists the table as 
> "inconsistent".
>
> Would appreciate any advice on how to resolve this.
>
> Thanks,
> geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
Stack
> Sent: Monday, August 08, 2011 4:25 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> Well, if it's a log no longer used, then you could just delete it.
> That'll get rid of the fsck complaint (True, logs are not per table so
> to be safe you'd need to flush all tables -- this would get all edits
> that the log could be carrying out into the filesystem into hfiles).
>
> St.Ack
>
> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com> 
> wrote:
>> Ah. Thanks for that. No, I don't need the log anymore. I am aware of
how
>> to flush a table from the hbase shell. But since the "fsck /" tells
me a
>> log file is corrupt, but not which table the corruption pertains to,
>> does this mean I have to flush all my tables (I have a lot of
tables).
>>
>> -geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>> Stack
>> Sent: Monday, August 08, 2011 4:09 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>> I've got a corrupt HDFS block in a region server's ".logs"
directory.
>>
>> You see this when you do hdfs fsck?  Is the log still needed?  You
>> could do a flush across the cluster and that should do away with your
>> dependency on this log.
>>
>> St.Ack
>>
> 

Re: corrupt .logs block

Posted by Stack <st...@duboce.net>.
If you can rowcount everything in the table, then the reported inconsistency
is probably not real (hbck produces the occasional false positive).
St.Ack
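For reference, the rowcount check mentioned above can be run with the RowCounter MapReduce job that ships with HBase. A sketch, assuming the hbase launcher script is on the PATH; 'mytable' is a placeholder table name:

```shell
# Count all rows in the table via the bundled MapReduce job, then
# re-run hbck to see whether the inconsistency report persists.
if command -v hbase >/dev/null 2>&1; then
  hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'mytable'
  hbase hbck
else
  echo "hbase not found on PATH; run this on a cluster node"
fi
```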

On Thu, Aug 11, 2011 at 11:46 AM, Jinsong Hu <ji...@hotmail.com> wrote:
> I ran into the same issue. I tried check_meta.rb --fix and add_table.rb, and
> still get the same hbck "inconsistent" table,
> however, I am able to do a rowcount for the table and there is no problem.
>
> Jimmy
>
>
> --------------------------------------------------
> From: "Geoff Hendrey" <gh...@decarta.com>
> Sent: Thursday, August 11, 2011 10:36 AM
> To: <us...@hbase.apache.org>
> Cc: "Search" <Se...@decarta.com>
> Subject: RE: corrupt .logs block
>
>> so I delete the corrupt .logs files. OK, fine no more issue there. But a
>> handful of regions in a very large table (2000+ regions) are offline
>> (".META." says offline=true).
>>
>> How do I go about trying to get the region online, and how come restarting
>> hbase has no effect (region still offline).
>>
>> Tried 'hbck -fix', no effect. Hbck simply lists the table as
>> "inconsistent".
>>
>> Would appreciate any advice on how to resolve this.
>>
>> Thanks,
>> geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of Stack
>> Sent: Monday, August 08, 2011 4:25 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> Well, if it's a log no longer used, then you could just delete it.
>> That'll get rid of the fsck complaint (True, logs are not per table so
>> to be safe you'd need to flush all tables -- this would get all edits
>> that the log could be carrying out into the filesystem into hfiles).
>>
>> St.Ack
>>
>> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>>
>>> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
>>> to flush a table from the hbase shell. But since the "fsck /" tells me a
>>> log file is corrupt, but not which table the corruption pertains to,
>>> does this mean I have to flush all my tables (I have a lot of tables).
>>>
>>> -geoff
>>>
>>> -----Original Message-----
>>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>>> Stack
>>> Sent: Monday, August 08, 2011 4:09 PM
>>> To: user@hbase.apache.org
>>> Subject: Re: corrupt .logs block
>>>
>>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
>>> wrote:
>>>>
>>>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>>>
>>> You see this when you do hdfs fsck?  Is the log still needed?  You
>>> could do a flush across the cluster and that should do away with your
>>> dependency on this log.
>>>
>>> St.Ack
>>>
>>
>

Re: corrupt .logs block

Posted by Jinsong Hu <ji...@hotmail.com>.
I ran into the same issue. I tried check_meta.rb --fix and add_table.rb, and
still get the same hbck "inconsistent" report; however, I am able to do a
rowcount for the table with no problem.

Jimmy


--------------------------------------------------
From: "Geoff Hendrey" <gh...@decarta.com>
Sent: Thursday, August 11, 2011 10:36 AM
To: <us...@hbase.apache.org>
Cc: "Search" <Se...@decarta.com>
Subject: RE: corrupt .logs block

> so I delete the corrupt .logs files. OK, fine no more issue there. But a 
> handful of regions in a very large table (2000+ regions) are offline 
> (".META." says offline=true).
>
> How do I go about trying to get the region online, and how come restarting 
> hbase has no effect (region still offline).
>
> Tried 'hbck -fix', no effect. Hbck simply lists the table as 
> "inconsistent".
>
> Would appreciate any advice on how to resolve this.
>
> Thanks,
> geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of Stack
> Sent: Monday, August 08, 2011 4:25 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> Well, if it's a log no longer used, then you could just delete it.
> That'll get rid of the fsck complaint (True, logs are not per table so
> to be safe you'd need to flush all tables -- this would get all edits
> that the log could be carrying out into the filesystem into hfiles).
>
> St.Ack
>
> On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com> 
> wrote:
>> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
>> to flush a table from the hbase shell. But since the "fsck /" tells me a
>> log file is corrupt, but not which table the corruption pertains to,
>> does this mean I have to flush all my tables (I have a lot of tables).
>>
>> -geoff
>>
>> -----Original Message-----
>> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
>> Stack
>> Sent: Monday, August 08, 2011 4:09 PM
>> To: user@hbase.apache.org
>> Subject: Re: corrupt .logs block
>>
>> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
>> wrote:
>>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>>
>> You see this when you do hdfs fsck?  Is the log still needed?  You
>> could do a flush across the cluster and that should do away with your
>> dependency on this log.
>>
>> St.Ack
>>
> 

RE: corrupt .logs block

Posted by Geoff Hendrey <gh...@decarta.com>.
So I deleted the corrupt .logs files; no more issue there. But a handful of regions in a very large table (2000+ regions) are offline (".META." says offline=true). 

How do I go about getting these regions online, and why does restarting hbase have no effect (the regions stay offline)?

Tried 'hbck -fix', no effect. Hbck simply lists the table as "inconsistent". 

Would appreciate any advice on how to resolve this.

Thanks,
geoff

-----Original Message-----
From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of Stack
Sent: Monday, August 08, 2011 4:25 PM
To: user@hbase.apache.org
Subject: Re: corrupt .logs block

Well, if it's a log no longer used, then you could just delete it.
That'll get rid of the fsck complaint (True, logs are not per table so
to be safe you'd need to flush all tables -- this would get all edits
that the log could be carrying out into the filesystem into hfiles).

St.Ack

On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com> wrote:
> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
> to flush a table from the hbase shell. But since the "fsck /" tells me a
> log file is corrupt, but not which table the corruption pertains to,
> does this mean I have to flush all my tables (I have a lot of tables).
>
> -geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
> Stack
> Sent: Monday, August 08, 2011 4:09 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
> wrote:
>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>
> You see this when you do hdfs fsck?  Is the log still needed?  You
> could do a flush across the cluster and that should do away with your
> dependency on this log.
>
> St.Ack
>

Re: corrupt .logs block

Posted by Stack <st...@duboce.net>.
Well, if it's a log that's no longer used, then you could just delete it.
That'll get rid of the fsck complaint. (True, logs are not per-table, so
to be safe you'd need to flush all tables -- this would persist any edits
the log could be carrying out to the filesystem as hfiles.)

St.Ack
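The flush-everything step can be scripted against the HBase shell. A sketch, with placeholder table names -- in practice you would substitute each table name from the shell's 'list' output:

```shell
# Flush each table so no edits remain only in the write-ahead logs;
# after this, a stale or corrupt log file can be deleted more safely.
# 'table_one' / 'table_two' are placeholders for your actual tables.
if command -v hbase >/dev/null 2>&1; then
  hbase shell <<'EOF'
flush 'table_one'
flush 'table_two'
exit
EOF
else
  echo "hbase not found on PATH; run this on a cluster node"
fi
```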

On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <gh...@decarta.com> wrote:
> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
> to flush a table from the hbase shell. But since the "fsck /" tells me a
> log file is corrupt, but not which table the corruption pertains to,
> does this mean I have to flush all my tables (I have a lot of tables).
>
> -geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
> Stack
> Sent: Monday, August 08, 2011 4:09 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
> wrote:
>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>
> You see this when you do hdfs fsck?  Is the log still needed?  You
> could do a flush across the cluster and that should do away with your
> dependency on this log.
>
> St.Ack
>

RE: corrupt .logs block

Posted by Geoff Hendrey <gh...@decarta.com>.
Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
to flush a table from the hbase shell. But since the "fsck /" tells me a
log file is corrupt, but not which table the corruption pertains to,
does this mean I have to flush all my tables (I have a lot of tables)?

-geoff

-----Original Message-----
From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
Stack
Sent: Monday, August 08, 2011 4:09 PM
To: user@hbase.apache.org
Subject: Re: corrupt .logs block

On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com>
wrote:
> I've got a corrupt HDFS block in a region server's ".logs" directory.

You see this when you do hdfs fsck?  Is the log still needed?  You
could do a flush across the cluster and that should do away with your
dependency on this log.

St.Ack

Re: corrupt .logs block

Posted by Stack <st...@duboce.net>.
On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <gh...@decarta.com> wrote:
> I've got a corrupt HDFS block in a region server's ".logs" directory.

You see this when you do hdfs fsck?  Is the log still needed?  You
could do a flush across the cluster and that should do away with your
dependency on this log.

St.Ack
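The fsck check referred to above can be scoped to the HBase log directory to find exactly which file carries the bad block. A sketch, assuming the default '/hbase' root directory (adjust to your hbase.rootdir) and a 2011-era 'hadoop fsck' command:

```shell
# Report files with corrupt or missing blocks under the .logs directory.
# '/hbase/.logs' assumes the default HBase root; adjust as needed.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fsck /hbase/.logs -files -blocks | grep -i -E 'CORRUPT|MISSING' || true
else
  echo "hadoop not found on PATH; run this on a cluster node"
fi
```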