Posted to dev@hbase.apache.org by Ryan Rawson <ry...@gmail.com> on 2009/04/02 08:51:41 UTC

thinking about hbase 0.20

hi all,

it's been a long road, but it's time to start thinking about what will
conclusively be in 0.20.

I'll let you fight that out a bit... personally I'd be happy with hfile +
KeyValue.

But, one last thing, what is our migration story going to be?

-ryan

RE: thinking about hbase 0.20

Posted by "Jim Kellerman (POWERSET)" <Ji...@microsoft.com>.
How long do you think it will take to finish up all the things
around 1249?

1215 is essential.
1302 would be nice to have.

Beyond that, hfile, KeyValue, and the ZK stuff are big changes. I wouldn't
want to stuff more than the above into a single release.

---
Jim Kellerman, Powerset (Live Search, Microsoft Corporation)



Re: thinking about hbase 0.20

Posted by Nitay <ni...@gmail.com>.
I think it'd be good to have HBASE-1302 in for 0.20.

RE: thinking about hbase 0.20

Posted by Jonathan Gray <jl...@streamy.com>.
I personally feel very strongly about the need to finish all the work
surrounding 1249.

Erik and I have spent an enormous amount of time designing and
re-implementing the client, the API, and the implementation of
gets/puts/deletes.

Without these changes HBase will be improved, but it will still be doing all
sorts of silly things in its implementation: it has problems with high
numbers of columns and poor performance on deletes, and it almost never
takes advantage of "early-out" scenarios, so entire scans are required in
almost every case today.

The good news is that it's mostly done.  We're waiting to get a solid 1234
patch committed and tested before breaking it apart again.  It's a
significant change, but well thought out and mostly complete.  Now is the
time to make these more radical changes; there's a full migration either way.
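
For concreteness, the kind of client change being discussed is a move from
the old batch-update calls to explicit Get and Put objects. A rough sketch of
what such an API can look like -- the class and method names here are
illustrative guesses at the in-flux design, not the committed interface:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetPutSketch {
      public static void main(String[] args) throws Exception {
        HTable table = new HTable(new HBaseConfiguration(), "mytable");

        // A Put targets a single row; each add() stages one column value.
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("col"), Bytes.toBytes("v1"));
        table.put(put);

        // A Get names the row (and optionally columns) up front, which is
        // what makes "early-out" behavior possible on the server side.
        Get get = new Get(Bytes.toBytes("row1"));
        Result result = table.get(get);
        System.out.println(Bytes.toString(
            result.getValue(Bytes.toBytes("info"), Bytes.toBytes("col"))));
      }
    }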

JG


Re: thinking about hbase 0.20

Posted by stack <st...@duboce.net>.
That'd be ideal.

Regarding which features should be in 0.20.0, we should start weeding the
list of 77 issues currently filed against 0.20.0 here:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&sorter/order=DESC&sorter/field=priority&resolution=-1&pid=12310753&fixfor=12313474
.

St.Ack


Re: thinking about hbase 0.20

Posted by Ryan Rawson <ry...@gmail.com>.
Thinking about a migration, practically speaking this would be doable:

- Flush and compact everything.  Get rid of reference files from region
splits.
- Take each mapfile (ignore the index files), read the file in, write an
equivalent hfile out.
- Done!

This can't be done while the cluster is online, however.
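
A minimal sketch of the per-mapfile rewrite in the second bullet. The
MapFile.Reader side is stock Hadoop, and HStoreKey/ImmutableBytesWritable
are the 0.19 store-file types; the HFileWriter interface is a stand-in for
the new hfile writer, whose real API may differ:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HStoreKey;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.util.Writables;
    import org.apache.hadoop.io.MapFile;

    public class MapFileToHFile {
      /** Stand-in for the new hfile writer; the real API may differ. */
      public interface HFileWriter {
        void append(byte[] key, byte[] value) throws IOException;
        void close() throws IOException;
      }

      /** Rewrites one 0.19 store mapfile into an equivalent hfile. */
      public static void rewrite(FileSystem fs, Path mapfileDir,
          HFileWriter writer, Configuration conf) throws IOException {
        MapFile.Reader reader =
            new MapFile.Reader(fs, mapfileDir.toString(), conf);
        try {
          HStoreKey key = new HStoreKey();
          ImmutableBytesWritable value = new ImmutableBytesWritable();
          // MapFiles are sorted, so appending in iteration order preserves
          // the ordering the new file format needs.
          while (reader.next(key, value)) {
            writer.append(Writables.getBytes(key), value.get());
          }
        } finally {
          reader.close();
          writer.close();
        }
      }
    }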

Any other suggestions?


Re: thinking about hbase 0.20

Posted by stack <st...@duboce.net>.
I filed HBASE-1215 as the issue to cover migration from 0.19.x to 0.20.0.

We have a migration 'system' already.  You run ./bin/hbase migrate. Going
from 0.19.0 to 0.20.0, we'll need to add a mapreduce job that rewrites all
hbase data to the new format.  It needs to be MR for those cases where the
data is large.

I thought at first that we could do lazy migration, but after looking at it,
keeping two key types in the one context looked too complex.

St.Ack


Re: thinking about hbase 0.20

Posted by Ryan Rawson <ry...@gmail.com>.
Maybe we should look at testing with KFS - I did some tests, and the IO
seems to be slightly slower, but lost data is lost data.

Append and sync do work in KFS, unlike HDFS.

It would cause a good stir in the Hadoop community...
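
For anyone who wants to try it, pointing a test cluster at KFS is mostly a
filesystem-URI change, since Hadoop ships KFS bindings. Something like the
following in the site config -- the host and port are placeholders:

    <property>
      <name>fs.default.name</name>
      <value>kfs://kfs-meta.example.com:20000</value>
    </property>
    <property>
      <name>fs.kfs.metaServerHost</name>
      <value>kfs-meta.example.com</value>
    </property>
    <property>
      <name>fs.kfs.metaServerPort</name>
      <value>20000</value>
    </property>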

Re: thinking about hbase 0.20

Posted by stack <st...@duboce.net>.
On Fri, Apr 3, 2009 at 2:41 AM, Ryan Rawson <ry...@gmail.com> wrote:

>
> So, what will be in hadoop-0.20 to minimize this kind of horrible data
> loss?
>

In the 0.20 timeframe, you will have to enable flush (HADOOP-5332), but as
Jim says, it's not going to do much good without HADOOP-4379.  The latter
won't be in hadoop 0.20.  We'll have to work to make sure it makes it into
HADOOP 0.21.  One recent suggestion was to contribute a patch to HADOOP that
enables appends in TRUNK -- but we should first make sure that all
objections to append have been put to rest.

Working flush/sync is the most important hbase issue.  Up to now, we've not
been doing a good job staying on top of its progress.
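
For reference, the flush support being discussed is gated by a config flag
on the HDFS side. Assuming HADOOP-5332 exposes it as dfs.support.append
(worth verifying once the patch lands), enabling it would look like this in
the hadoop site config:

    <property>
      <name>dfs.support.append</name>
      <value>true</value>
    </property>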

St.Ack

Re: thinking about hbase 0.20

Posted by stack <st...@duboce.net>.
On Fri, Apr 3, 2009 at 2:53 AM, Jim Kellerman (POWERSET) <
Jim.Kellerman@microsoft.com> wrote:

> Stack and I cannot contribute to Hadoop....


To be clear, we can test, review and even commit patches to the parent
hadoop -- just not write actual patches.



> Be warned,
> however, that if you haven't ventured into the depths of the namenode
> and datanode, it's *really* complicated.
>

IMO, datanode is not that bad.  There isn't that much code there.  It'd just
require a bit of dogged study.

St.Ack

RE: thinking about hbase 0.20

Posted by "Jim Kellerman (POWERSET)" <Ji...@microsoft.com>.
sync() is not good enough, nor is syncFS(). What we need is HADOOP-4379.
However, the current patch does not recover the (HDFS file) lease properly.

Stack and I cannot contribute to Hadoop, but if someone else in hbase-dev
wants to help Dhruba out, I'm sure he'd welcome contributions. Be warned,
however, that if you haven't ventured into the depths of the namenode
and datanode, it's *really* complicated.

---
Jim Kellerman, Powerset (Live Search, Microsoft Corporation)


Re: thinking about hbase 0.20

Posted by Ryan Rawson <ry...@gmail.com>.
I want to talk about sync() in HDFS for a bit...

I had a cluster crash -- OOMEs out the butt; 17 of 19 machines were dead when
I got to the scene.

What I found was that .META. listed 2-3x as many regions as actually existed
on disk -- tons of stale entries from parent splits.  It looks like a bunch
of updates and deletes were never persisted.  And by a bunch, I mean a SHIT
TON.  It was insane.  I had to write HbaseFsck.java as an experiment to
recover without rm -rf /hbase.

So, what will be in hadoop-0.20 to minimize this kind of horrible data loss?

Is this the 'sync()' call that is on-again-off-again reliable?

What about append?  Do we really need append?  Syncing an open file to
persist data is good enough, no?
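
To make the question concrete, here is a minimal sketch of the pattern at
issue: write edits to an open HDFS file and sync() after each batch. The
path and the edit format are made up for illustration; whether this sync()
actually survives a crash is exactly what HADOOP-5332 and HADOOP-4379,
discussed elsewhere in this thread, are about:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical log path, for illustration only.
        FSDataOutputStream out = fs.create(new Path("/tmp/demo-wal.log"));

        out.writeBytes("put row1 col1 value1\n");  // stand-in for a real edit
        // The open question: after this call, would a recovering process
        // actually see the bytes, even though the file is still open?
        out.sync();

        out.close();
        fs.close();
      }
    }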

-ryan


RE: thinking about hbase 0.20

Posted by "Jim Kellerman (POWERSET)" <Ji...@microsoft.com>.
> -----Original Message-----
> From: Erik Holstad [mailto:erikholstad@gmail.com]
> Sent: Thursday, April 02, 2009 5:09 PM
> To: hbase-dev@hadoop.apache.org
> Subject: Re: thinking about hbase 0.20
> 
> So the way I see it, from our point of view, we can probably get 0.20 out
> the door a week after that meeting, so maybe a week and a half after Stack
> gets back.

We still have to wait for hadoop-0.20, which has no release candidate yet.
However, pushing tasks out is still a good idea, so that we can spend the
time between the hadoop-0.20 release candidate and hbase-0.20 fixing issues,
which I'm certain we will find. All in all, this should result in a more
timely and stable release for hbase-0.20.

-Jim

Re: thinking about hbase 0.20

Posted by Erik Holstad <er...@gmail.com>.
If we split HBASE-1249 into its smaller parts and start with HBASE-1304,
which is the client-server changes, I don't think it will be long before I
have tested the basic setup and have it under control. After that I need to
add some extra caller methods, but the infrastructure will stay the same, so
it is not going to take much time.
HBASE-880, which is a pretty big issue itself, can be done quickly once we
have reached agreement on how it will look; as far as I can tell, the coding
itself is pretty straightforward.

I would really like to see this as part of 0.20, and we are doing everything
we can to make that happen.
I think it would be nice to have a meeting so we can discuss these big
changes; I will gladly give a 30-minute talk or something to explain what
has been done and the reasoning behind those changes.
So the way I see it, from our point of view, we can probably get 0.20 out
the door a week after that meeting, so maybe a week and a half after Stack
gets back.

Regards Erik

Re: thinking about hbase 0.20

Posted by Andrew Purtell <ap...@apache.org>.
For what my employer will be doing some time in the next
few weeks, HFile is really important. ZK integration for
active/passive master failover are also important, along
with HBASE-1302.

I think C API should be pushed to 0.21. The pending big
changes to the HRS<->client interaction continue to be a
blocker. I can finish up HBASE-794 in time for 0.20. I
made some edits to the relevant JIRAs that can be undone
if there is disagreement. 

   - Andy
