Posted to user@hbase.apache.org by "Michael G. Noll" <mi...@googlemail.com> on 2011/04/12 13:38:15 UTC

Summarizing instructions to run HBase 0.90.2 on Hadoop 0.20.x, feedback appreciated

Hi all,

Like a few other people on this mailing list, I am currently working on
getting HBase up and running on Hadoop 0.20.2.  I think I have by now
read most of the relevant past discussions on this topic, e.g.
St.Ack's thread on creating an append release [6], Mike Spreitzner's
recent attempt [3] at making HBase work on Hadoop 0.20.x, which made it
into the HBase docs [2], and the recent discussion in [8] when 0.90.2
was about to be released last week.

St.Ack mentioned that Hadoop 0.22 might be the first release with append
support out of the box [7].  However, on our side we are stuck for the
time being on the production-ready 0.20.x branch, so waiting until
Hadoop 0.22 or rather 0.23 [9] (and HBase 0.92) are eventually released
is not an option. :-/

So in order to help myself and hopefully also other readers of this
mailing list, I will try to summarize my steps so far to understand and
build Hadoop 0.20-append for use with HBase 0.90.2, the problems I have
run into, and the pending issues and roadblocks that I haven't solved
yet.

- I checked out branch-0.20-append [1] according to the HBase instructions
  at [2] and ran a successful build via "ant mvn-install" [5] (see the
  command sketch after this list).
- I inspected the code history of the 0.20.2 release and branch-0.20-append
  (git show-branch release-0.20.2 branch-0.20-append) and noticed that
  the append branch is based on the 0.20.2 release.  In other words, there
  is not a single commit in the 0.20.2 release that is not also in
  branch-0.20-append. Good!
- FWIW, I compared the Hadoop JAR file shipped with HBase 0.90.1/0.90.2
  (hadoop-core-0.20-append-r1056497.jar) with the one I built from the
  latest version of branch-0.20-append.  I noticed that the JAR file in
  HBase seems to miss the latest commit for HDFS-1554 (SVN rev 1057313
  aka git commit df0d79cc). In git terms, the Hadoop JAR file shipped
  in HBase is based on HEAD^1 of branch-0.20-append.  Is there a reason
  for not including the latest commit?
- I also discovered (like Mike Spreitzner did [3]) that there is a
  BlockChannel.class file in HBase's Hadoop JAR file that seems to come
  "out of nowhere".  I haven't found it or a reference to it anywhere in
  the source code.  I decompiled the class [4], and it appears to be an
  innocent file, maybe used for debugging. A build artifact?
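
For reference, here is roughly what I ran.  The git mirror URL follows
the wiki instructions [5]; paths and jar names are from my own setup
and may differ on yours:

    # clone a git mirror of Apache Hadoop and check out the append branch
    git clone git://git.apache.org/hadoop-common.git hadoop-append
    cd hadoop-append
    git checkout -t origin/branch-0.20-append

    # compare the 0.20.2 release tag against the append branch
    git show-branch release-0.20.2 branch-0.20-append

    # build and install the artifacts into the local Maven repository
    ant mvn-install

    # compare the class list of the HBase-shipped jar with my own build
    jar tf $HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar | sort > hbase-jar.txt
    jar tf build/hadoop-core-*.jar | sort > my-jar.txt
    diff hbase-jar.txt my-jar.txt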

Then I tried two different builds (the checkout commands I used are
sketched after this list):

1) A first build to replicate and test the Hadoop JAR shipped with HBase
   0.90.{1,2}, using all commit history up to SVN rev 1056491 aka git
   e499be8.  The last commit is "HDFS-1555 ..." from 07-Jan-11.
   In git terms, this is a build based on HEAD^1.
2) A second build to create the current version of the Hadoop append
   branch, using all commit history up to SVN rev 1057313 aka git
   df0d79cc.  The last commit is "HDFS-1554 ..." from 10-Jan-11.
   In git terms, this is a build based on HEAD, i.e. the latest version
   of branch-0.20-append.
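
These are roughly the git commands I used to pin each build (commit
hashes as reported by my clone):

    # build 1: HEAD^1 of branch-0.20-append (what ships with HBase 0.90.{1,2})
    git checkout e499be8
    ant clean mvn-install

    # build 2: HEAD of branch-0.20-append (includes HDFS-1554)
    git checkout df0d79cc
    ant clean mvn-install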

Here are my findings:

1) When I run "ant test" for the append branch version apparently used by
   HBase 0.90.{1,2}, I consistently run into a build error in
   TestFileAppend4, logged to
   build/test/TEST-org.apache.hadoop.hdfs.TestFileAppend4.txt.
   Details are available at [10], and commands to re-run just this test
   are sketched after this list.
2) When I run "ant test" for the latest version of the append branch, I
   get the same error as before. However, I sometimes -- not always -- get
   additional failures/errors for
    * TEST-org.apache.hadoop.hdfs.server.namenode.TestEditLogRace.txt [11]
    * TEST-org.apache.hadoop.hdfs.TestMultiThreadedSync.txt [12]
   both of which look like "general" errors to me.  Maybe a problem of
   the machine I'm running the build and the tests on?
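
To reproduce just the failing test rather than the full suite, something
like the following should work, assuming the 0.20 ant build honors the
testcase property (which I believe it does):

    # run only TestFileAppend4 instead of the whole test suite
    ant -Dtestcase=TestFileAppend4 test-core

    # inspect the log of the failed run
    less build/test/TEST-org.apache.hadoop.hdfs.TestFileAppend4.txt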

This leads me to two questions:

1. Are the test errors described above a known issue that can be ignored?
   Or did I miss something when building the append branch?
   From what I have read, my build process should have produced a Hadoop
   JAR file that is equivalent to the one shipped with HBase.  So any
   error during my tests should have surfaced for the HBase build, too.

2. Is there a way to test whether my custom build is "correct"?  In other
   words, how can I find out whether append/syncing works properly, so
   that it does not lead to data loss in HBase at some point?
   Unfortunately, I haven't found any instructions for intentionally
   creating such a data-loss scenario to verify whether Hadoop/HBase
   handles it properly.  (A crude manual check I have in mind is
   sketched below.)  St.Ack, for instance, only talks about some basic
   tests he did himself [13].
   I know someone already asked this question before without receiving
   a good answer but hey -- there's always hope. :-)
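
For lack of official instructions, this is the kind of crude manual
check I have in mind -- my own idea, not a vetted procedure, and the
regionserver pid below is a placeholder:

    # 1. write a batch of rows, e.g. with the bundled PerformanceEvaluation tool
    hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1

    # 2. kill a regionserver hard so the master must split and replay its WAL
    kill -9 <pid of a regionserver>

    # 3. once the regions are reassigned, read everything back and watch for errors
    hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialRead 1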


Any feedback or pointers would be greatly appreciated!

I'm happy to experiment and to report back.  Since St.Ack's suggestion
to make a quick, official "append-ready" release of Hadoop for HBase [6]
was not pursued (I do not want to restart that discussion here), at least I
would like to help the community with a set of easy-to-follow instructions
for other people to get HBase and Hadoop 0.20.x up and running.

Best,
Michael


PS: And congratulations for getting 0.90.2 out. Your work is really
appreciated! :-)


[1] http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/
[2] http://hbase.apache.org/book/notsoquick.html#hadoop
[3] http://search-hadoop.com/m/mfUkf2EEiaf
[4] http://pastebin.ubuntu.com/587699/
[5] http://wiki.apache.org/hadoop/GitAndHadoop
[6] http://www.mail-archive.com/general@hadoop.apache.org/msg02543.html
[7] http://www.mail-archive.com/user@hbase.apache.org/msg06772.html
[8] http://www.mail-archive.com/user@hbase.apache.org/msg07060.html
[9] http://www.mail-archive.com/common-dev@hadoop.apache.org/msg02785.html
[10] http://pastebin.ubuntu.com/593073/
[11] http://pastebin.ubuntu.com/593075/
[12] http://pastebin.ubuntu.com/593076/
[13] http://www.mail-archive.com/user@hbase.apache.org/msg07158.html

Re: Summarizing instructions to run HBase 0.90.2 on Hadoop 0.20.x, feedback appreciated

Posted by Stack <st...@duboce.net>.
To tie off this thread, Michael Noll took the info he'd collected here,
put it together with the research he'd done beforehand, and wrote
up a sweet blog posting on how to build branch-0.20-append.  All
details are covered and the reader is taken gently from an empty
directory to a built and deployed cluster.  Recommended. See
http://www.michael-noll.com/blog/2011/04/14/building-an-hadoop-0-20-x-version-for-hbase-0-90-2/

(I'll add a pointer to the posting to our 'Getting Started:
Requirements' section in the website book)

Thanks again Michael,
St.Ack

Re: Summarizing instructions to run HBase 0.90.2 on Hadoop 0.20.x, feedback appreciated

Posted by Stack <st...@duboce.net>.
On Thu, Apr 14, 2011 at 5:43 AM, Michael G. Noll
<mi...@googlemail.com> wrote:
> Hi St.Ack,
>
> many thanks for your detailed reply. This clears up a lot of things!
>
>
> On Wed, Apr 13, 2011 at 01:48, Stack <st...@duboce.net> wrote:
>>
>> So, HBase 0.90.2 and the tip of branch-0.20-append is recommended.
>
> To summarize the RPC version differences:
> - Hadoop 0.20.2 release uses version 41.
> - The Hadoop 0.20-append version shipped with HBase 0.90.{1,2} uses 42.
> - The trunk version of Hadoop 0.20-append uses 43.
>
> So I understand that in order to actually install Hadoop 0.20-append for use
> with HBase 0.90.2, we can simply use an existing 0.20.2 release installation
> and replace its JAR files with the append JAR files created from the tip of
> branch-0.20-append.
>
> My own answer to this question would be "Yes" but since it is a critical
> question I still want to ask it.
>

Yes.



>
> Regarding the various build test failures:
> I read through the links you posted (e.g. your comments in HBASE-3285), and
> it seems that the build failures for org.apache.hadoop.hdfs.TestFileAppend4
> do not indicate at the moment whether the build is really erroneous or not.
> In other words, the (or rather, some) unit tests are currently broken for
> the tip of branch-0.20-append so we may (have to) ignore this build error
> because it doesn't really tell us anything at the moment. Right?
>


Yes.


We need to fix it.


>
>> > 2) When I run "ant test" for the latest version of the append branch, I
>> >   get the same error as before. However, I sometimes -- not always --
>> > get
>> >   additional failures/errors for
>> >    * TEST-org.apache.hadoop.hdfs.server.namenode.TestEditLogRace.txt
>> > [11]
>> >    * TEST-org.apache.hadoop.hdfs.TestMultiThreadedSync.txt [12]
>> >   both of which look like "general" errors to me.  Maybe a problem of
>> >   the machine I'm running the build and the tests on?
>> >
>>
>> This I have not noticed.
>
> In my latest build tests, I have seen errors reported by
> org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling.txt.
>
> If you want to, I can perform some more build tests on branch-0.20-append
> and report back if that helps.
>


That would help, though hold off for a moment; the tip might be getting
an update.  Let me go ask (will report back).


>



> Would I have to run these unit tests as part of an HBase build process, or
> is there a way to run them separately?
>

They are hbase unit tests so they are run as part of the hbase test
suite (They are under the wal package... have to do w/ HLog).
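
For example, from an hbase 0.90.2 source checkout something like this
should run just one of the wal tests (the class name is from the wal
package; adjust as needed):

    # run only a write-ahead-log related unit test
    mvn test -Dtest=TestHLogSplit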


> My understanding is that I can use HBase 0.90.2 release as is as soon as I
> have a Hadoop 0.20-append build ready. In other words, when I replace the
> JAR files of Hadoop 0.20.2 release with the JAR files built from
> branch-0.20-append (for all machines in a cluster), then I can use the
> tarball of HBase 0.90.2 and do not need to build HBase myself.
>


That is right.

The hadoop jar at hbase-0.90.2/lib/hadoop-core*.jar must match what
you have in your cluster before you start up hbase.
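
A rough sketch of the swap -- paths and jar names are illustrative, use
whatever your build actually produced, and do it on every node:

    # replace the stock hadoop jar with your append build
    rm $HADOOP_HOME/hadoop-0.20.2-core.jar
    cp build/hadoop-core-*.jar $HADOOP_HOME/

    # make hbase ship the very same jar
    rm $HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar
    cp build/hadoop-core-*.jar $HBASE_HOME/lib/

    # sanity check: both copies must be identical
    md5sum $HADOOP_HOME/hadoop-core-*.jar $HBASE_HOME/lib/hadoop-core-*.jar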



> But I guess what you imply is that I would have to re-run HBase unit tests
> myself if I want to test them with the "trunk" branch-0.20-append JAR files
> (because though your tests passed before release, they were against the
> HEAD^1 version of branch-0.20-append).
>

Correct.

I will do this too and report back.


> At least some consolation. :-)
> But: ...on HEAD or HEAD^1 of branch-0.20-append for Hadoop?
>

On CDH3b2 with the patches from the tip of branch-0.20-append applied
(I attached this diff to the issue that added use of the new lease
semantics).  What we run is also published up in github.



> FYI, I am currently preparing a step-by-step summary of how you go about
> building Hadoop 0.20-append for HBase 0.90.2 based on the feedback in this
> thread. I can also post it back to the mailing list, and I'm also more than
> happy to help extending the current HBase docs in one way or the other if
> you are interested in that.
>

That would be excellent.

I think a post to the mailing list of text we could apply to the book
would be the best.  Mailing list memory seems ephemeral.  The book
lasts a little longer.

Thank you Michael for digging in here.
St.Ack

Re: Summarizing instructions to run HBase 0.90.2 on Hadoop 0.20.x, feedback appreciated

Posted by "Michael G. Noll" <mi...@googlemail.com>.
Hi St.Ack,

many thanks for your detailed reply. This clears up a lot of things!


On Wed, Apr 13, 2011 at 01:48, Stack <st...@duboce.net> wrote:

> So, HBase 0.90.2 and the tip of branch-0.20-append is recommended.
>

To summarize the RPC version differences (a quick way to check is
sketched after this list):
- Hadoop 0.20.2 release uses version 41.
- The Hadoop 0.20-append version shipped with HBase 0.90.{1,2} uses 42.
- The trunk version of Hadoop 0.20-append uses 43.
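
A quick way to check which version a given tree speaks is to look at the
versionID constant, which as far as I can see lives in ClientProtocol
(path as in branch-0.20):

    # print the RPC versionID of the checked-out tree
    grep -n versionID src/hdfs/org/apache/hadoop/hdfs/protocol/ClientProtocol.java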

So I understand that in order to actually install Hadoop 0.20-append for use
with HBase 0.90.2, we can simply use an existing 0.20.2 release installation
and replace its JAR files with the append JAR files created from the tip of
branch-0.20-append.

My own answer to this question would be "Yes" but since it is a critical
question I still want to ask it.


Regarding the various build test failures:
I read through the links you posted (e.g. your comments in HBASE-3285), and
it seems that the build failures for org.apache.hadoop.hdfs.TestFileAppend4
do not indicate at the moment whether the build is really erroneous or not.
In other words, the (or rather, some) unit tests are currently broken for
the tip of branch-0.20-append so we may (have to) ignore this build error
because it doesn't really tell us anything at the moment. Right?


> > 2) When I run "ant test" for the latest version of the append branch, I
> >   get the same error as before. However, I sometimes -- not always -- get
> >   additional failures/errors for
> >    * TEST-org.apache.hadoop.hdfs.server.namenode.TestEditLogRace.txt [11]
> >    * TEST-org.apache.hadoop.hdfs.TestMultiThreadedSync.txt [12]
> >   both of which look like "general" errors to me.  Maybe a problem of
> >   the machine I'm running the build and the tests on?
> >
>
> This I have not noticed.
>

In my latest build tests, I have seen errors reported by
org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling.txt.

If you want to, I can perform some more build tests on branch-0.20-append
and report back if that helps.


> > 2. Is there a way to test whether my custom build is "correct"?  In other
> >   words, how can I find out whether the append/syncing works properly
> >   so that it does not come to a data loss in HBase at some point.
> >   Unfortunately, I haven't found any instructions to intentionally
> >   create such a data-loss scenario for verifying whether Hadoop/HBase
> >   handles it properly.  St.Ack, for instance, only talks about some
> >   basic tests he did himself [13].
>
>
> Yes.
>
> There are hbase unit tests that will check for lost data.  These
> passed before we cut the release.
>

Ok.

Would I have to run these unit tests as part of an HBase build process, or
is there a way to run them separately?

My understanding is that I can use HBase 0.90.2 release as is as soon as I
have a Hadoop 0.20-append build ready. In other words, when I replace the
JAR files of Hadoop 0.20.2 release with the JAR files built from
branch-0.20-append (for all machines in a cluster), then I can use the
tarball of HBase 0.90.2 and do not need to build HBase myself.

But I guess what you imply is that I would have to re-run HBase unit tests
myself if I want to test them with the "trunk" branch-0.20-append JAR files
(because though your tests passed before release, they were against the
HEAD^1 version of branch-0.20-append).




> Its probably little consolation to you but we've been running 0.90.1,
> a 0.90.1 that had HBASE-3285 applied (and a CDH3b2 with 1554 et al.
> applied) with a good while in production here where I work on multiple
> clusters.
>

At least some consolation. :-)
But: ...on HEAD or HEAD^1 of branch-0.20-append for Hadoop?


FYI, I am currently preparing a step-by-step summary of how you go about
building Hadoop 0.20-append for HBase 0.90.2 based on the feedback in this
thread. I can also post it back to the mailing list, and I'm more than
happy to help extend the current HBase docs in one way or another if
you are interested in that.

Many thanks again for your help, it's really appreciated!

Best,
Michael

Re: Summarizing instructions to run HBase 0.90.2 on Hadoop 0.20.x, feedback appreciated

Posted by Stack <st...@duboce.net>.
See inline below.

On Tue, Apr 12, 2011 at 4:38 AM, Michael G. Noll
<mi...@googlemail.com> wrote:
> So in order to help myself and hopefully also other readers of this
> mailing list, I try to summarize my steps so far to understand and build
> Hadoop 0.20-append for use with HBase 0.90.2, the problems I have run
> into, and I'll also list the pending issues and roadblocks that I haven't
> solved yet.
>

Thanks for putting together this list.

> - FWIW, I compared the Hadoop JAR file shipped with HBase 0.90.1/0.90.2
>  (hadoop-core-0.20-append-r1056497.jar) with the one I built from the
>  latest version of branch-0.20-append.  I noticed that the JAR file in
>  HBase seems to miss the latest commit for HDFS-1554 (SVN rev 1057313
>  aka git commit df0d79cc). In git terms, the Hadoop JAR file shipped
>  in HBase is based on HEAD^1 of branch-0.20-append.  Is there a reason
>  for not including the latest commit?

That is right.

The last commit went in after we'd released 0.90.  We could not pull
it into hbase because the last change on the tip of the hadoop branch
-- hdfs-1554 -- changed the RPC version.  If we'd pulled it in, folks
upgrading from 0.90.0 to 0.90.1 would have been surprised when their
HBase could not connect to their hadoop cluster (I actually committed
the new hadoop jar and was convinced we should back it out; see
HBASE-3520).

That said, we've found that these last few commits on branch-0.20-append
by Hairong are pretty critical.  They provide a short-circuit that
allows the Master to grab the lease on WAL files so it can split them
on regionserver crash ("New semantics for recoverLease"); we've found
that the master can on occasion fail to take over a WAL file when
doing the open-for-append, which is what we did before "New
semantics..".

HBase 0.90.2 can make use of this new API.

So, HBase 0.90.2 and the tip of branch-0.20-append is recommended.

(CDH betas did not have HDFS-1554 either.  The release does, and the
HBase included in CDH makes use of the new semantics around lease
recovery.)
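
If you want to check whether a given hadoop-core jar already carries the
new call, something like this should do it (assuming the method surfaces
in DistributedFileSystem, which is where I believe it was added; the jar
name is illustrative):

    # look for the new lease recovery method in the client API
    javap -classpath hadoop-core-0.20-append-r1057313.jar \
        org.apache.hadoop.hdfs.DistributedFileSystem | grep -i recoverlease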

> - I also discovered (like Mike Spreitzner did [3]) that there is a
>  BlockChannel.class file in HBase's Hadoop JAR file that seems to come
>  "out of nowhere".  I haven't found it or a reference to it anywhere in
>  the source code.  I decompiled the class [4], and it appears to be an
>  innocent file, maybe used for debugging. A build artifact?
>

Whoops.  Thanks for spotting that.  Build artifact I'd say.  I was
probably trying a patch and didn't clean up properly.

> Then I tried two different builds:
>
> 1) A first build to replicate and test the Hadoop JAR shipped with HBase
>   0.90.{1,2}, using all commit history up to SVN rev 1056491 aka git
>   e499be8.  The last commit being "HDFS-1555 ..." from 07-Jan-11.
>   In git terms, this is a build based on HEAD^1.
> 2) A second build to create the current version of the Hadoop append
>   branch, using all commit history up to SVN rev 1057313 aka git
>   df0d79cc.  The last commit is "HDFS-1554 ..." from 10-Jan-11.
>   In git terms, this is a build based on HEAD, i.e. the latest version
>   of branch-0.20-append.
>
> Here are my findings:
>
> 1) When I run "ant test" for the append branch version apparently used by
>   HBase 0.90.{1,2}, I consistently run into a build error in
>   TestFileAppend4, logged to
>   build/test/TEST-org.apache.hadoop.hdfs.TestFileAppend4.txt.
>   Details are available at [10].

Yes.  I've since noticed this.  I started to dig in a while back but
got distracted.  I think the test started failing with this commit:

commit 62441fbd516ec9132619d448a1051554d29d2dba
Author: Dhruba Borthakur <dh...@apache.org>
Date:   Thu Jun 17 01:52:50 2010 +0000

    HDFS-1210. DFSClient should log exception when block recovery fails.
    (Todd Lipcon via dhruba)
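
If someone wants to confirm that guess, something like the following
should do it (assuming the hash above is in your clone):

    # run the test on the commit just before the suspect
    git checkout 62441fbd516ec9132619d448a1051554d29d2dba^
    ant -Dtestcase=TestFileAppend4 clean test-core

    # then on the suspect commit itself
    git checkout 62441fbd516ec9132619d448a1051554d29d2dba
    ant -Dtestcase=TestFileAppend4 clean test-core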



> 2) When I run "ant test" for the latest version of the append branch, I
>   get the same error as before. However, I sometimes -- not always -- get
>   additional failures/errors for
>    * TEST-org.apache.hadoop.hdfs.server.namenode.TestEditLogRace.txt [11]
>    * TEST-org.apache.hadoop.hdfs.TestMultiThreadedSync.txt [12]
>   both of which look like "general" errors to me.  Maybe a problem of
>   the machine I'm running the build and the tests on?
>

This I have not noticed.


> This leads me to two questions:
>
> 1. Are the test errors described above a known issue that can be ignored?
>   Or did I miss something when building the append branch?
>   From what I have read, my build process should have produced an Hadoop
>   JAR file that is equivalent to the one shipped with HBase.  So any
>   error during my tests should have surfaced for the HBase build, too.
>

See above.


> 2. Is there a way to test whether my custom build is "correct"?  In other
>   words, how can I find out whether the append/syncing works properly
>   so that it does not come to a data loss in HBase at some point.
>   Unfortunately, I haven't found any instructions to intentionally
>   create such a data-loss scenario for verifying whether Hadoop/HBase
>   handles it properly.  St.Ack, for instance, only talks about some
>   basic tests he did himself [13].

Yes.

There are hbase unit tests that will check for lost data.  These
passed before we cut the release.

It's probably little consolation to you but we've been running 0.90.1,
a 0.90.1 that had HBASE-3285 applied (and a CDH3b2 with 1554 et al.
applied), for a good while in production on multiple clusters here
where I work.


>   I know someone already asked this question before without receiving
>   a good answer but hey -- there's always hope. :-)
>
>
> Any feedback or pointers would be greatly appreciated!
>
> I'm happy to experiment and to report back.  Since St.Ack's suggestion
> to make a quick, official "append-ready" release of Hadoop for HBase [6]
> was not pursued (I do not want to restart a discussion here), at least I
> would like to help the community with a set of easy-to-follow instructions
> for other people to get HBase and Hadoop 0.20.x up and running.
>

You are a good man Michael.  Sounds like I need to update our Manual
at least to include the info above.

Thanks for doing the digging and taking the time to craft the note above,
St.Ack

