Posted to user@hbase.apache.org by Vidhyashankar Venkataraman <vi...@yahoo-inc.com> on 2010/06/14 20:08:00 UTC
Bulk load problems..
I tried dumping my own HFiles (similar to HFileOutputFormat: open an HFile.Writer, append the key-value pairs, then close the writer) and tried loading them using the Ruby script. I had altered loadtable.rb to modify the block size for the column family.
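[Ed.: the write path described above looks roughly like the following. This is a sketch against the 0.20-era HFile API; the filesystem, output path, block size, compression setting, and the `sortedKeyValues` iterable are placeholders, and the exact HFile.Writer constructor signature should be checked against your HBase source tree.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.HFile;

// Sketch only -- arguments are placeholders, not the poster's actual values.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
HFile.Writer writer = new HFile.Writer(fs,
    new Path("/bulk/DocData/cf/part-00000"), // hypothetical output file
    64 * 1024,                               // the altered block size
    null,                                    // compression: none
    KeyValue.KEY_COMPARATOR);                // keys must be appended in sorted order
try {
  for (KeyValue kv : sortedKeyValues) {      // placeholder iterable of sorted KVs
    writer.append(kv);
  }
} finally {
  writer.close();                            // note: no file-info metadata appended here
}
```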
The script reported no errors. But on checking the web interface (the table's name is DocData),
http://b5120231.yst.yahoo.net:60010/table.jsp?name=DocData
I got this:
org.apache.hadoop.hbase.client.NoServerForRegionException: No server address listed in .META. for region DocData,,1276537980228
And in each region server, I got this
2010-06-14 17:57:48,846 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Error opening DocData,0000133832,1276537980224
java.lang.IllegalAccessError: Has not been initialized
at org.apache.hadoop.hbase.regionserver.StoreFile.getMaxSequenceId(StoreFile.java:216)
at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:417)
at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1549)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:312)
at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1564)
at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1531)
at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1451)
at java.lang.Thread.run(Thread.java:619)
I am using HBase 0.20.3... Have I made a mistake while dumping HFiles?
Vidhya
Re: Bulk load problems..
Posted by Todd Lipcon <to...@cloudera.com>.
On Mon, Jun 14, 2010 at 12:37 PM, Vidhyashankar Venkataraman <vidhyash@yahoo-inc.com> wrote:

> > In trunk there's a feature whereby the metadata can include a special
> > "this is a bulk load" entry. In 0.20, you have to pick some sequence
> > number - I'd go with something like 0 for a bulk load. Check out what
> > HFileOutputFormat does and copy that :)
>
> I did that initially but it doesn't compile: StoreFile in 0.20.3 doesn't
> recognize MAJOR_COMPACTION_KEY or BULKLOAD_TIME_KEY..
>
I meant to check out what HFileOutputFormat does in the 0.20 branch. As you
noticed, the new code in trunk is substantially different (and requires some
large-ish changes to Store, StoreFile, etc. to support it).
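[Ed.: concretely, what the failing StoreFile.getMaxSequenceId call looks for is a max-sequence-id entry in the HFile's file info. A hedged sketch of the fix before closing the writer, assuming `HFile.Writer.appendFileInfo` and a `StoreFile.MAX_SEQ_ID_KEY` constant as in the 0.20 branch -- verify both against your source tree:]

```java
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.util.Bytes;

// Record a max sequence id in the file info before close; without it,
// StoreFile.getMaxSequenceId throws the "Has not been initialized" error
// seen in the region server logs. The key constant is assumed to match
// the 0.20-branch StoreFile; if it is not visible from your code, check
// the 0.20 HFileOutputFormat source for the exact key it writes.
writer.appendFileInfo(StoreFile.MAX_SEQ_ID_KEY,
    Bytes.toBytes(0L));  // 0 as the "bulk load" sequence number
writer.close();
```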
-Todd
> Let me try 0 and see what happens..
>
> Vidhya
> [remainder of quoted thread snipped; see the full messages below]
--
Todd Lipcon
Software Engineer, Cloudera
Re: Bulk load problems..
Posted by Vidhyashankar Venkataraman <vi...@yahoo-inc.com>.
> In trunk there's a feature whereby the metadata can include a special "this
> is a bulk load" entry. In 0.20, you have to pick some sequence number - I'd
> go with something like 0 for a bulk load. Check out what HFileOutputFormat
> does and copy that :)
I did that initially but it doesn't compile: StoreFile in 0.20.3 doesn't recognize MAJOR_COMPACTION_KEY or BULKLOAD_TIME_KEY..
Let me try 0 and see what happens..
Vidhya
[quoted message snipped; see Todd's reply below]
Re: Bulk load problems..
Posted by Vidhyashankar Venkataraman <vi...@yahoo-inc.com>.
Spoke too soon.. Thanks..
[quoted message snipped; see Todd's reply below]
Re: Bulk load problems..
Posted by Todd Lipcon <to...@cloudera.com>.
On Mon, Jun 14, 2010 at 12:14 PM, Vidhyashankar Venkataraman <vidhyash@yahoo-inc.com> wrote:
> >> Most likely you are not appending the correct metadata entries (in
> >> particular the log sequence ID)
> Since I am not creating any logs, the max log sequence ID should be -1,
> isnt it?
>
>
In trunk there's a feature whereby the metadata can include a special "this
is a bulk load" entry. In 0.20, you have to pick some sequence number - I'd
go with something like 0 for a bulk load. Check out what HFileOutputFormat
does and copy that :)
-Todd
> [remainder of quoted thread snipped; see the full messages below]
--
Todd Lipcon
Software Engineer, Cloudera
Re: Bulk load problems..
Posted by Vidhyashankar Venkataraman <vi...@yahoo-inc.com>.
>> Most likely you are not appending the correct metadata entries (in
>> particular the log sequence ID)
Since I am not creating any logs, the max log sequence ID should be -1, isn't it?
[quoted messages snipped; see the full messages below]
Re: Bulk load problems..
Posted by Vidhyashankar Venkataraman <vi...@yahoo-inc.com>.
>> Most likely you are not appending the correct metadata entries (in
>> particular the log sequence ID)
Can you elaborate? What additional info do I need to add when I create/close HFiles?
Thank you
vidhya
[quoted message snipped; see Todd's reply below]
Re: Bulk load problems..
Posted by Todd Lipcon <to...@cloudera.com>.
On Mon, Jun 14, 2010 at 11:08 AM, Vidhyashankar Venkataraman <vidhyash@yahoo-inc.com> wrote:
> I tried dumping my own Hfiles (similar to HFileOutputFormat: open an
> Hfile.writer, append the key value pairs and then close the writer) and
> tried loading them using the ruby script.. I had altered loadtable.rb to
> modify the block size for the column family.
>
> The script reported no errors. But on checking with the web interface,
> (table's name is DocData)
> http://b5120231.yst.yahoo.net:60010/table.jsp?name=DocData
>
> I got this:
> org.apache.hadoop.hbase.client.NoServerForRegionException: No server
> address listed in .META. for region DocData,,1276537980228
>
> And in each region server, I got this
> [region server stack trace snipped; see the original message above]
>
> I am using Hbase 0.20.3... Have I made a mistake while dumping Hfiles?
>
>
Most likely you are not appending the correct metadata entries (in
particular the log sequence ID)
-Todd
--
Todd Lipcon
Software Engineer, Cloudera