Posted to user@hbase.apache.org by "George P. Stathis" <gs...@traackr.com> on 2011/03/24 20:15:08 UTC

Recommendation for migrating region server implementations

Hey folks,

What would be the best approach for migrating away from a given region
server implementation back to the default out-of-the-box one? My goal here
is to upgrade our cluster to 0.90 and migrate away from IndexedRegionServer
back to the default HRegionServer.

The only options that I know of at the moment are:

   - Export/Import - pros: straightforward, cons: tedious, takes a long time
   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the pros
   and cons here because I've never used it. It seems to require that tables
   are moved to a different setup. Can it be used in place?
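
For reference, the invocations I have in mind look roughly like the
following (untested on my end; 'monitors' and the /backup path are just
placeholders):

  # Export the table to an HDFS directory, then Import it back in
  hbase org.apache.hadoop.hbase.mapreduce.Export monitors /backup/monitors
  hbase org.apache.hadoop.hbase.mapreduce.Import monitors /backup/monitors

  # CopyTable: --new.name copies into a differently named table on the same
  # cluster; --peer.adr would point it at a separate cluster instead
  hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=monitors_copy monitors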

If folks know of a different way to do this, please let me know.

Thank you in advance for your time.

-GS

Re: Recommendation for migrating region server implementations

Posted by "George P. Stathis" <gs...@traackr.com>.
Thanks Gary. Good to know.

On Thu, Mar 24, 2011 at 4:46 PM, Gary Helmling <gh...@gmail.com> wrote:

> Hi George,
>
> Looking at the IndexedTableDescriptor code on github
>
>
> https://github.com/hbase-trx/hbase-transactional-tableindexed/blob/master/src/main/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableDescriptor.java
> )
>
> it seems to just store the serialized index definition to the
> HTableDescriptor's value map.  If you can live with the mucked up shell
> output for the table description (caused by doing Bytes.toString() on the
> index definition), then you should be good to go.
>
> --gh
>
>
> On Thu, Mar 24, 2011 at 1:33 PM, Stack <st...@duboce.net> wrote:
>
> > I'm not sure.  If it came up -- 'seems to work' -- then it looks like
> > we just ignore the extra stuff (though, that seems a little odd... I'd
> > expect the deserializing of these 'exotic's to throw an exception).
> > Test more I'd say.  The shell you are using below is for sure from an
> > untarnished hbase -- there is no indexedhbase in the CLASSPATH?
> >
> > St.Ack
> >
> > On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <gstathis@traackr.com
> >
> > wrote:
> > > Ah, it seems to work, yes. I was thinking all along that it didn't
> > because I
> > > had setup a simple unit test that kept throwing this:
> > > java.io.IOException: Unknown protocol to name node:
> > > org.apache.hadoop.hbase.ipc.IndexedRegionInterface
> > > at
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
> > > I had upgraded locally on top of my old 0.89 hbase.root and it seemed
> to
> > > work OK at first. Startup was clean and IRB shell was working. That
> unit
> > > test kept throwing me off though. Then, I remembered that we have a
> > custom
> > > hbase-site.xml for our maven unit tests that was still referencing the
> > > old IndexedRegionInterface. Removed that and problem solved.
> > > So, does this mean that we can use the existing table definitions
> as-is?
> > > They still contain old index definitions. E.g.
> > > hbase(main):022:0> describe 'monitors'
> > > DESCRIPTION
> > >                                                        ENABLED
> > >
> > >  {NAME => 'monitors', INDEXES =>
> > >
> 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
> > > true
> > >
> >
>  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
> > >
> > >
> >
>  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
> > > FAMILIES => [{NAME => ...
> > > The INDEXES dictionary is still there. Could it create issues?
> > > -GS
> > > On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
> > >>
> > >> Do you have disk space to spare?  I'd think that all that is different
> > >> about indexed hbase is the WAL format.  If you had an hbase.rootdir
> > >> that was the product of a clean shutdown with no WALs to process, I'd
> > >> think you could just 0.90.x on top of it.  If you had the disk space
> > >> you could give it a go?
> > >> St.Ack
> > >>
> > >> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
> > >> <gs...@traackr.com> wrote:
> > >> > Hey folks,
> > >> >
> > >> > What would be the best approach for migrating away from a given
> region
> > >> > server implementation back to the default out-of-the box one? My
> goal
> > >> > here
> > >> > is to upgrade our cluster to 0.90 and migrate away from
> > >> > IndexedRegionServer
> > >> > back to the default HRegionServer.
> > >> >
> > >> > The only options that I know of at the moment are:
> > >> >
> > >> >   - Export/Import - pros: straightforward, cons: tedious, takes a
> long
> > >> > time
> > >> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the
> > >> > pros
> > >> >   and cons here because I've never used it. It seems to require that
> > >> > tables
> > >> >   are moved to a different setup. Can it be used in place?
> > >> >
> > >> > If folks know of a different way to do this, please let me know.
> > >> >
> > >> > Thank you in advance for your time.
> > >> >
> > >> > -GS
> > >> >
> > >
> > >
> >
>

Re: Recommendation for migrating region server implementations

Posted by Gary Helmling <gh...@gmail.com>.
Hi George,

Looking at the IndexedTableDescriptor code on github
(https://github.com/hbase-trx/hbase-transactional-tableindexed/blob/master/src/main/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableDescriptor.java),
it seems to just store the serialized index definition in the
HTableDescriptor's value map.  If you can live with the mucked up shell
output for the table description (caused by doing Bytes.toString() on the
index definition), then you should be good to go.
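
If you want to sanity-check what is actually stored there, something like
this should do it (rough sketch against the 0.90 client API; "monitors" is
a placeholder table name):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class InspectIndexes {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes("monitors"));
    // The index definition rides along in the descriptor's generic
    // key/value bucket under the "INDEXES" key.
    byte[] serialized = htd.getValue(Bytes.toBytes("INDEXES"));
    System.out.println("INDEXES value present: " + (serialized != null));
  }
}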

--gh


On Thu, Mar 24, 2011 at 1:33 PM, Stack <st...@duboce.net> wrote:

> I'm not sure.  If it came up -- 'seems to work' -- then it looks like
> we just ignore the extra stuff (though, that seems a little odd... I'd
> expect the deserializing of these 'exotic's to throw an exception).
> Test more I'd say.  The shell you are using below is for sure from an
> untarnished hbase -- there is no indexedhbase in the CLASSPATH?
>
> St.Ack
>
> On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <gs...@traackr.com>
> wrote:
> > Ah, it seems to work, yes. I was thinking all along that it didn't
> because I
> > had setup a simple unit test that kept throwing this:
> > java.io.IOException: Unknown protocol to name node:
> > org.apache.hadoop.hbase.ipc.IndexedRegionInterface
> > at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
> > I had upgraded locally on top of my old 0.89 hbase.root and it seemed to
> > work OK at first. Startup was clean and IRB shell was working. That unit
> > test kept throwing me off though. Then, I remembered that we have a
> custom
> > hbase-site.xml for our maven unit tests that was still referencing the
> > old IndexedRegionInterface. Removed that and problem solved.
> > So, does this mean that we can use the existing table definitions as-is?
> > They still contain old index definitions. E.g.
> > hbase(main):022:0> describe 'monitors'
> > DESCRIPTION
> >                                                        ENABLED
> >
> >  {NAME => 'monitors', INDEXES =>
> > 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
> > true
> >
>  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
> >
> >
>  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
> > FAMILIES => [{NAME => ...
> > The INDEXES dictionary is still there. Could it create issues?
> > -GS
> > On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
> >>
> >> Do you have disk space to spare?  I'd think that all that is different
> >> about indexed hbase is the WAL format.  If you had an hbase.rootdir
> >> that was the product of a clean shutdown with no WALs to process, I'd
> >> think you could just 0.90.x on top of it.  If you had the disk space
> >> you could give it a go?
> >> St.Ack
> >>
> >> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
> >> <gs...@traackr.com> wrote:
> >> > Hey folks,
> >> >
> >> > What would be the best approach for migrating away from a given region
> >> > server implementation back to the default out-of-the box one? My goal
> >> > here
> >> > is to upgrade our cluster to 0.90 and migrate away from
> >> > IndexedRegionServer
> >> > back to the default HRegionServer.
> >> >
> >> > The only options that I know of at the moment are:
> >> >
> >> >   - Export/Import - pros: straightforward, cons: tedious, takes a long
> >> > time
> >> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the
> >> > pros
> >> >   and cons here because I've never used it. It seems to require that
> >> > tables
> >> >   are moved to a different setup. Can it be used in place?
> >> >
> >> > If folks know of a different way to do this, please let me know.
> >> >
> >> > Thank you in advance for your time.
> >> >
> >> > -GS
> >> >
> >
> >
>

Re: Recommendation for migrating region server implementations

Posted by "George P. Stathis" <gs...@traackr.com>.
Thanks stack. Just in case, I'll run the following to keep things nice and
clean:

// Drop the leftover ITHBase index metadata from the table descriptor.
Configuration config = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(config);
admin.disableTable("old_indexed_table");
HTableDescriptor tableDesc =
    admin.getTableDescriptor(Bytes.toBytes("old_indexed_table"));
tableDesc.remove(Bytes.toBytes("INDEXES"));
admin.modifyTable(Bytes.toBytes("old_indexed_table"), tableDesc);
admin.enableTable("old_indexed_table");

-GS

On Thu, Mar 24, 2011 at 4:59 PM, Stack <st...@duboce.net> wrote:

> George:
>
> Gary's digging would explain why things would work on an untarnished
> hbase (Thanks Gary).  HTD and HCD both have buckets that you can dump
> any schema key/value into.    Thats what ITHBase used it looks like.
>
> St.Ack
>
> On Thu, Mar 24, 2011 at 1:52 PM, George P. Stathis <gs...@traackr.com>
> wrote:
> > CLASSPATH is pristine:
> > George-Stathiss-MacBook-Pro logs gstathis$ hbase classpath
> > /opt/servers/hbase-current/conf
> >
> /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/lib/tools.jar
> > /opt/servers/hbase-current
> > /opt/servers/hbase-current/hbase-0.90.1-CDH3B4-tests.jar
> > /opt/servers/hbase-current/hbase-0.90.1-CDH3B4.jar
> > /opt/servers/hbase-current/lib/avro-1.3.3.jar
> > /opt/servers/hbase-current/lib/commons-cli-1.2.jar
> > /opt/servers/hbase-current/lib/commons-codec-1.4.jar
> > /opt/servers/hbase-current/lib/commons-el-1.0.jar
> > /opt/servers/hbase-current/lib/commons-httpclient-3.1.jar
> > /opt/servers/hbase-current/lib/commons-lang-2.5.jar
> > /opt/servers/hbase-current/lib/commons-logging-1.1.1.jar
> > /opt/servers/hbase-current/lib/guava-r06.jar
> > /opt/servers/hbase-current/lib/hadoop-core-0.20.2-CDH3B4.jar
> > /opt/servers/hbase-current/lib/hadoop-gpl-compression-0.2.0-dev.jar
> > /opt/servers/hbase-current/lib/jasper-compiler-5.5.23.jar
> > /opt/servers/hbase-current/lib/jasper-runtime-5.5.23.jar
> > /opt/servers/hbase-current/lib/jaxb-api-2.1.jar
> > /opt/servers/hbase-current/lib/jersey-core-1.4.jar
> > /opt/servers/hbase-current/lib/jersey-json-1.4.jar
> > /opt/servers/hbase-current/lib/jersey-server-1.4.jar
> > /opt/servers/hbase-current/lib/jetty-6.1.26.jar
> > /opt/servers/hbase-current/lib/jetty-util-6.1.26.jar
> > /opt/servers/hbase-current/lib/jruby-complete-1.0.3.jar
> > /opt/servers/hbase-current/lib/jsp-2.1-6.1.14.jar
> > /opt/servers/hbase-current/lib/jsp-api-2.1-6.1.14.jar
> > /opt/servers/hbase-current/lib/jsr311-api-1.1.1.jar
> > /opt/servers/hbase-current/lib/log4j-1.2.16.jar
> > /opt/servers/hbase-current/lib/protobuf-java-2.3.0.jar
> > /opt/servers/hbase-current/lib/servlet-api-2.5-6.1.14.jar
> > /opt/servers/hbase-current/lib/slf4j-api-1.5.8.jar
> > /opt/servers/hbase-current/lib/slf4j-log4j12-1.5.8.jar
> > /opt/servers/hbase-current/lib/stax-api-1.0.1.jar
> > /opt/servers/hbase-current/lib/thrift-0.2.0.jar
> > /opt/servers/hbase-current/lib/zookeeper-3.3.2-CDH3B4.jar
> > /opt/servers/hadoop-current/conf
> > I'm just getting started with testing. I've only scratched the surface.
> > Also, I could always use IndexedTableDescriptor.removeIndex() to remove
> the
> > 'exotics'.
> > -GS
> > On Thu, Mar 24, 2011 at 4:33 PM, Stack <st...@duboce.net> wrote:
> >>
> >> I'm not sure.  If it came up -- 'seems to work' -- then it looks like
> >> we just ignore the extra stuff (though, that seems a little odd... I'd
> >> expect the deserializing of these 'exotic's to throw an exception).
> >> Test more I'd say.  The shell you are using below is for sure from an
> >> untarnished hbase -- there is no indexedhbase in the CLASSPATH?
> >>
> >> St.Ack
> >>
> >> On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <
> gstathis@traackr.com>
> >> wrote:
> >> > Ah, it seems to work, yes. I was thinking all along that it didn't
> >> > because I
> >> > had setup a simple unit test that kept throwing this:
> >> > java.io.IOException: Unknown protocol to name node:
> >> > org.apache.hadoop.hbase.ipc.IndexedRegionInterface
> >> > at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
> >> > I had upgraded locally on top of my old 0.89 hbase.root and it seemed
> to
> >> > work OK at first. Startup was clean and IRB shell was working. That
> unit
> >> > test kept throwing me off though. Then, I remembered that we have a
> >> > custom
> >> > hbase-site.xml for our maven unit tests that was still referencing the
> >> > old IndexedRegionInterface. Removed that and problem solved.
> >> > So, does this mean that we can use the existing table definitions
> as-is?
> >> > They still contain old index definitions. E.g.
> >> > hbase(main):022:0> describe 'monitors'
> >> > DESCRIPTION
> >> >                                                        ENABLED
> >> >
> >> >  {NAME => 'monitors', INDEXES =>
> >> >
> >> >
> 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
> >> > true
> >> >
> >> >
>  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
> >> >
> >> >
> >> >
>  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
> >> > FAMILIES => [{NAME => ...
> >> > The INDEXES dictionary is still there. Could it create issues?
> >> > -GS
> >> > On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
> >> >>
> >> >> Do you have disk space to spare?  I'd think that all that is
> different
> >> >> about indexed hbase is the WAL format.  If you had an hbase.rootdir
> >> >> that was the product of a clean shutdown with no WALs to process, I'd
> >> >> think you could just 0.90.x on top of it.  If you had the disk space
> >> >> you could give it a go?
> >> >> St.Ack
> >> >>
> >> >> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
> >> >> <gs...@traackr.com> wrote:
> >> >> > Hey folks,
> >> >> >
> >> >> > What would be the best approach for migrating away from a given
> >> >> > region
> >> >> > server implementation back to the default out-of-the box one? My
> goal
> >> >> > here
> >> >> > is to upgrade our cluster to 0.90 and migrate away from
> >> >> > IndexedRegionServer
> >> >> > back to the default HRegionServer.
> >> >> >
> >> >> > The only options that I know of at the moment are:
> >> >> >
> >> >> >   - Export/Import - pros: straightforward, cons: tedious, takes a
> >> >> > long
> >> >> > time
> >> >> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about
> the
> >> >> > pros
> >> >> >   and cons here because I've never used it. It seems to require
> that
> >> >> > tables
> >> >> >   are moved to a different setup. Can it be used in place?
> >> >> >
> >> >> > If folks know of a different way to do this, please let me know.
> >> >> >
> >> >> > Thank you in advance for your time.
> >> >> >
> >> >> > -GS
> >> >> >
> >> >
> >> >
> >
> >
>

Re: Recommendation for migrating region server implementations

Posted by Stack <st...@duboce.net>.
George:

Gary's digging would explain why things would work on an untarnished
hbase (Thanks Gary).  HTD and HCD both have buckets that you can dump
any schema key/value into.  That's what ITHBase used, it looks like.
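
For example, roughly (arbitrary key/value, nothing ITHBase-specific):

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorBucket {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor("some_table");
    htd.addFamily(new HColumnDescriptor("f"));
    // Arbitrary metadata can ride along in the descriptor's value map...
    htd.setValue("MY_APP_KEY", "whatever you like");
    System.out.println(htd.getValue("MY_APP_KEY"));
    // ...and can be dropped again without touching the column families.
    htd.remove(Bytes.toBytes("MY_APP_KEY"));
  }
}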

St.Ack

On Thu, Mar 24, 2011 at 1:52 PM, George P. Stathis <gs...@traackr.com> wrote:
> CLASSPATH is pristine:
> George-Stathiss-MacBook-Pro logs gstathis$ hbase classpath
> /opt/servers/hbase-current/conf
> /System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/lib/tools.jar
> /opt/servers/hbase-current
> /opt/servers/hbase-current/hbase-0.90.1-CDH3B4-tests.jar
> /opt/servers/hbase-current/hbase-0.90.1-CDH3B4.jar
> /opt/servers/hbase-current/lib/avro-1.3.3.jar
> /opt/servers/hbase-current/lib/commons-cli-1.2.jar
> /opt/servers/hbase-current/lib/commons-codec-1.4.jar
> /opt/servers/hbase-current/lib/commons-el-1.0.jar
> /opt/servers/hbase-current/lib/commons-httpclient-3.1.jar
> /opt/servers/hbase-current/lib/commons-lang-2.5.jar
> /opt/servers/hbase-current/lib/commons-logging-1.1.1.jar
> /opt/servers/hbase-current/lib/guava-r06.jar
> /opt/servers/hbase-current/lib/hadoop-core-0.20.2-CDH3B4.jar
> /opt/servers/hbase-current/lib/hadoop-gpl-compression-0.2.0-dev.jar
> /opt/servers/hbase-current/lib/jasper-compiler-5.5.23.jar
> /opt/servers/hbase-current/lib/jasper-runtime-5.5.23.jar
> /opt/servers/hbase-current/lib/jaxb-api-2.1.jar
> /opt/servers/hbase-current/lib/jersey-core-1.4.jar
> /opt/servers/hbase-current/lib/jersey-json-1.4.jar
> /opt/servers/hbase-current/lib/jersey-server-1.4.jar
> /opt/servers/hbase-current/lib/jetty-6.1.26.jar
> /opt/servers/hbase-current/lib/jetty-util-6.1.26.jar
> /opt/servers/hbase-current/lib/jruby-complete-1.0.3.jar
> /opt/servers/hbase-current/lib/jsp-2.1-6.1.14.jar
> /opt/servers/hbase-current/lib/jsp-api-2.1-6.1.14.jar
> /opt/servers/hbase-current/lib/jsr311-api-1.1.1.jar
> /opt/servers/hbase-current/lib/log4j-1.2.16.jar
> /opt/servers/hbase-current/lib/protobuf-java-2.3.0.jar
> /opt/servers/hbase-current/lib/servlet-api-2.5-6.1.14.jar
> /opt/servers/hbase-current/lib/slf4j-api-1.5.8.jar
> /opt/servers/hbase-current/lib/slf4j-log4j12-1.5.8.jar
> /opt/servers/hbase-current/lib/stax-api-1.0.1.jar
> /opt/servers/hbase-current/lib/thrift-0.2.0.jar
> /opt/servers/hbase-current/lib/zookeeper-3.3.2-CDH3B4.jar
> /opt/servers/hadoop-current/conf
> I'm just getting started with testing. I've only scratched the surface.
> Also, I could always use IndexedTableDescriptor.removeIndex() to remove the
> 'exotics'.
> -GS
> On Thu, Mar 24, 2011 at 4:33 PM, Stack <st...@duboce.net> wrote:
>>
>> I'm not sure.  If it came up -- 'seems to work' -- then it looks like
>> we just ignore the extra stuff (though, that seems a little odd... I'd
>> expect the deserializing of these 'exotic's to throw an exception).
>> Test more I'd say.  The shell you are using below is for sure from an
>> untarnished hbase -- there is no indexedhbase in the CLASSPATH?
>>
>> St.Ack
>>
>> On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <gs...@traackr.com>
>> wrote:
>> > Ah, it seems to work, yes. I was thinking all along that it didn't
>> > because I
>> > had setup a simple unit test that kept throwing this:
>> > java.io.IOException: Unknown protocol to name node:
>> > org.apache.hadoop.hbase.ipc.IndexedRegionInterface
>> > at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
>> > I had upgraded locally on top of my old 0.89 hbase.root and it seemed to
>> > work OK at first. Startup was clean and IRB shell was working. That unit
>> > test kept throwing me off though. Then, I remembered that we have a
>> > custom
>> > hbase-site.xml for our maven unit tests that was still referencing the
>> > old IndexedRegionInterface. Removed that and problem solved.
>> > So, does this mean that we can use the existing table definitions as-is?
>> > They still contain old index definitions. E.g.
>> > hbase(main):022:0> describe 'monitors'
>> > DESCRIPTION
>> >                                                        ENABLED
>> >
>> >  {NAME => 'monitors', INDEXES =>
>> >
>> > 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
>> > true
>> >
>> >  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
>> >
>> >
>> >  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
>> > FAMILIES => [{NAME => ...
>> > The INDEXES dictionary is still there. Could it create issues?
>> > -GS
>> > On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
>> >>
>> >> Do you have disk space to spare?  I'd think that all that is different
>> >> about indexed hbase is the WAL format.  If you had an hbase.rootdir
>> >> that was the product of a clean shutdown with no WALs to process, I'd
>> >> think you could just 0.90.x on top of it.  If you had the disk space
>> >> you could give it a go?
>> >> St.Ack
>> >>
>> >> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
>> >> <gs...@traackr.com> wrote:
>> >> > Hey folks,
>> >> >
>> >> > What would be the best approach for migrating away from a given
>> >> > region
>> >> > server implementation back to the default out-of-the box one? My goal
>> >> > here
>> >> > is to upgrade our cluster to 0.90 and migrate away from
>> >> > IndexedRegionServer
>> >> > back to the default HRegionServer.
>> >> >
>> >> > The only options that I know of at the moment are:
>> >> >
>> >> >   - Export/Import - pros: straightforward, cons: tedious, takes a
>> >> > long
>> >> > time
>> >> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the
>> >> > pros
>> >> >   and cons here because I've never used it. It seems to require that
>> >> > tables
>> >> >   are moved to a different setup. Can it be used in place?
>> >> >
>> >> > If folks know of a different way to do this, please let me know.
>> >> >
>> >> > Thank you in advance for your time.
>> >> >
>> >> > -GS
>> >> >
>> >
>> >
>
>

Re: Recommendation for migrating region server implementations

Posted by "George P. Stathis" <gs...@traackr.com>.
CLASSPATH is pristine:

George-Stathiss-MacBook-Pro logs gstathis$ hbase classpath
/opt/servers/hbase-current/conf
/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home/lib/tools.jar
/opt/servers/hbase-current
/opt/servers/hbase-current/hbase-0.90.1-CDH3B4-tests.jar
/opt/servers/hbase-current/hbase-0.90.1-CDH3B4.jar
/opt/servers/hbase-current/lib/avro-1.3.3.jar
/opt/servers/hbase-current/lib/commons-cli-1.2.jar
/opt/servers/hbase-current/lib/commons-codec-1.4.jar
/opt/servers/hbase-current/lib/commons-el-1.0.jar
/opt/servers/hbase-current/lib/commons-httpclient-3.1.jar
/opt/servers/hbase-current/lib/commons-lang-2.5.jar
/opt/servers/hbase-current/lib/commons-logging-1.1.1.jar
/opt/servers/hbase-current/lib/guava-r06.jar
/opt/servers/hbase-current/lib/hadoop-core-0.20.2-CDH3B4.jar
/opt/servers/hbase-current/lib/hadoop-gpl-compression-0.2.0-dev.jar
/opt/servers/hbase-current/lib/jasper-compiler-5.5.23.jar
/opt/servers/hbase-current/lib/jasper-runtime-5.5.23.jar
/opt/servers/hbase-current/lib/jaxb-api-2.1.jar
/opt/servers/hbase-current/lib/jersey-core-1.4.jar
/opt/servers/hbase-current/lib/jersey-json-1.4.jar
/opt/servers/hbase-current/lib/jersey-server-1.4.jar
/opt/servers/hbase-current/lib/jetty-6.1.26.jar
/opt/servers/hbase-current/lib/jetty-util-6.1.26.jar
/opt/servers/hbase-current/lib/jruby-complete-1.0.3.jar
/opt/servers/hbase-current/lib/jsp-2.1-6.1.14.jar
/opt/servers/hbase-current/lib/jsp-api-2.1-6.1.14.jar
/opt/servers/hbase-current/lib/jsr311-api-1.1.1.jar
/opt/servers/hbase-current/lib/log4j-1.2.16.jar
/opt/servers/hbase-current/lib/protobuf-java-2.3.0.jar
/opt/servers/hbase-current/lib/servlet-api-2.5-6.1.14.jar
/opt/servers/hbase-current/lib/slf4j-api-1.5.8.jar
/opt/servers/hbase-current/lib/slf4j-log4j12-1.5.8.jar
/opt/servers/hbase-current/lib/stax-api-1.0.1.jar
/opt/servers/hbase-current/lib/thrift-0.2.0.jar
/opt/servers/hbase-current/lib/zookeeper-3.3.2-CDH3B4.jar
/opt/servers/hadoop-current/conf

I'm just getting started with testing. I've only scratched the surface.
Also, I could always use IndexedTableDescriptor.removeIndex() to remove the
'exotics'.
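
Something along these lines, I'm thinking (untested; it assumes the hbase-trx
IndexedTableDescriptor wraps an existing HTableDescriptor, that removeIndex()
takes the index id as a String, and that getBaseTableDescriptor() hands back
the plain descriptor -- and the hbase-trx jar would only need to be on the
client classpath for this one-off, not on the servers):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.tableindexed.IndexedTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class DropOldIndexes {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    admin.disableTable("monitors");
    HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes("monitors"));
    // Re-read the serialized index specs and drop the one named "foo"
    // (index id guessed from the describe output earlier in the thread).
    IndexedTableDescriptor itd = new IndexedTableDescriptor(htd);
    itd.removeIndex("foo");
    admin.modifyTable(Bytes.toBytes("monitors"), itd.getBaseTableDescriptor());
    admin.enableTable("monitors");
  }
}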

-GS

On Thu, Mar 24, 2011 at 4:33 PM, Stack <st...@duboce.net> wrote:

> I'm not sure.  If it came up -- 'seems to work' -- then it looks like
> we just ignore the extra stuff (though, that seems a little odd... I'd
> expect the deserializing of these 'exotic's to throw an exception).
> Test more I'd say.  The shell you are using below is for sure from an
> untarnished hbase -- there is no indexedhbase in the CLASSPATH?
>
> St.Ack
>
> On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <gs...@traackr.com>
> wrote:
> > Ah, it seems to work, yes. I was thinking all along that it didn't
> because I
> > had setup a simple unit test that kept throwing this:
> > java.io.IOException: Unknown protocol to name node:
> > org.apache.hadoop.hbase.ipc.IndexedRegionInterface
> > at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
> > I had upgraded locally on top of my old 0.89 hbase.root and it seemed to
> > work OK at first. Startup was clean and IRB shell was working. That unit
> > test kept throwing me off though. Then, I remembered that we have a
> custom
> > hbase-site.xml for our maven unit tests that was still referencing the
> > old IndexedRegionInterface. Removed that and problem solved.
> > So, does this mean that we can use the existing table definitions as-is?
> > They still contain old index definitions. E.g.
> > hbase(main):022:0> describe 'monitors'
> > DESCRIPTION
> >                                                        ENABLED
> >
> >  {NAME => 'monitors', INDEXES =>
> > 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
> > true
> >
>  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
> >
> >
>  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
> > FAMILIES => [{NAME => ...
> > The INDEXES dictionary is still there. Could it create issues?
> > -GS
> > On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
> >>
> >> Do you have disk space to spare?  I'd think that all that is different
> >> about indexed hbase is the WAL format.  If you had an hbase.rootdir
> >> that was the product of a clean shutdown with no WALs to process, I'd
> >> think you could just 0.90.x on top of it.  If you had the disk space
> >> you could give it a go?
> >> St.Ack
> >>
> >> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
> >> <gs...@traackr.com> wrote:
> >> > Hey folks,
> >> >
> >> > What would be the best approach for migrating away from a given region
> >> > server implementation back to the default out-of-the box one? My goal
> >> > here
> >> > is to upgrade our cluster to 0.90 and migrate away from
> >> > IndexedRegionServer
> >> > back to the default HRegionServer.
> >> >
> >> > The only options that I know of at the moment are:
> >> >
> >> >   - Export/Import - pros: straightforward, cons: tedious, takes a long
> >> > time
> >> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the
> >> > pros
> >> >   and cons here because I've never used it. It seems to require that
> >> > tables
> >> >   are moved to a different setup. Can it be used in place?
> >> >
> >> > If folks know of a different way to do this, please let me know.
> >> >
> >> > Thank you in advance for your time.
> >> >
> >> > -GS
> >> >
> >
> >
>

Re: Recommendation for migrating region server implementations

Posted by Stack <st...@duboce.net>.
I'm not sure.  If it came up -- 'seems to work' -- then it looks like
we just ignore the extra stuff (though, that seems a little odd... I'd
expect the deserializing of these 'exotics' to throw an exception).
Test more I'd say.  The shell you are using below is for sure from an
untarnished hbase -- there is no indexedhbase in the CLASSPATH?

St.Ack

On Thu, Mar 24, 2011 at 1:30 PM, George P. Stathis <gs...@traackr.com> wrote:
> Ah, it seems to work, yes. I was thinking all along that it didn't because I
> had setup a simple unit test that kept throwing this:
> java.io.IOException: Unknown protocol to name node:
> org.apache.hadoop.hbase.ipc.IndexedRegionInterface
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)
> I had upgraded locally on top of my old 0.89 hbase.root and it seemed to
> work OK at first. Startup was clean and IRB shell was working. That unit
> test kept throwing me off though. Then, I remembered that we have a custom
> hbase-site.xml for our maven unit tests that was still referencing the
> old IndexedRegionInterface. Removed that and problem solved.
> So, does this mean that we can use the existing table definitions as-is?
> They still contain old index definitions. E.g.
> hbase(main):022:0> describe 'monitors'
> DESCRIPTION
>                                                        ENABLED
>
>  {NAME => 'monitors', INDEXES =>
> 'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
> true
>  neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable
>
>  0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
> FAMILIES => [{NAME => ...
> The INDEXES dictionary is still there. Could it create issues?
> -GS
> On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:
>>
>> Do you have disk space to spare?  I'd think that all that is different
>> about indexed hbase is the WAL format.  If you had an hbase.rootdir
>> that was the product of a clean shutdown with no WALs to process, I'd
>> think you could just 0.90.x on top of it.  If you had the disk space
>> you could give it a go?
>> St.Ack
>>
>> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
>> <gs...@traackr.com> wrote:
>> > Hey folks,
>> >
>> > What would be the best approach for migrating away from a given region
>> > server implementation back to the default out-of-the box one? My goal
>> > here
>> > is to upgrade our cluster to 0.90 and migrate away from
>> > IndexedRegionServer
>> > back to the default HRegionServer.
>> >
>> > The only options that I know of at the moment are:
>> >
>> >   - Export/Import - pros: straightforward, cons: tedious, takes a long
>> > time
>> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the
>> > pros
>> >   and cons here because I've never used it. It seems to require that
>> > tables
>> >   are moved to a different setup. Can it be used in place?
>> >
>> > If folks know of a different way to do this, please let me know.
>> >
>> > Thank you in advance for your time.
>> >
>> > -GS
>> >
>
>

Re: Recommendation for migrating region server implementations

Posted by "George P. Stathis" <gs...@traackr.com>.
Ah, it seems to work, yes. I was thinking all along that it didn't because I
had setup a simple unit test that kept throwing this:

java.io.IOException: Unknown protocol to name node:
org.apache.hadoop.hbase.ipc.IndexedRegionInterface
 at
org.apache.hadoop.hbase.regionserver.HRegionServer.getProtocolVersion(HRegionServer.java:2400)

I had upgraded locally on top of my old 0.89 hbase.root and it seemed to
work OK at first. Startup was clean and IRB shell was working. That unit
test kept throwing me off though. Then, I remembered that we have a custom
hbase-site.xml for our maven unit tests that was still referencing the
old IndexedRegionInterface. Removed that and problem solved.
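
For anyone else hitting this, the stale overrides in that test
hbase-site.xml looked roughly like the following (property names and values
from memory -- double-check against your own config before deleting):

<!-- ITHBase-era settings that have to go when moving back to stock 0.90 -->
<property>
  <name>hbase.regionserver.class</name>
  <value>org.apache.hadoop.hbase.ipc.IndexedRegionInterface</value>
</property>
<property>
  <name>hbase.regionserver.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer</value>
</property>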

So, does this mean that we can use the existing table definitions as-is?
They still contain old index definitions. E.g.

hbase(main):022:0> describe 'monitors'
DESCRIPTION
                                                       ENABLED

 {NAME => 'monitors', INDEXES =>
'fooattributes:foo=org.apache.hadoop.hbase.client.tableindexed.IndexKeyGe
true
 neratorEorg.apache.hadoop.hbase.client.tableindexed.RowBasedIndexKeyGeneratorattributes:fooorg.apache.hadoop.io.Writable

 0org.apache.hadoop.io.ObjectWritable$NullInstance'org.apache.hadoop.io.WritableComparable',
FAMILIES => [{NAME => ...

The INDEXES dictionary is still there. Could it create issues?

-GS

On Thu, Mar 24, 2011 at 3:54 PM, Stack <st...@duboce.net> wrote:

> Do you have disk space to spare?  I'd think that all that is different
> about indexed hbase is the WAL format.  If you had an hbase.rootdir
> that was the product of a clean shutdown with no WALs to process, I'd
> think you could just 0.90.x on top of it.  If you had the disk space
> you could give it a go?
> St.Ack
>
> On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
> <gs...@traackr.com> wrote:
> > Hey folks,
> >
> > What would be the best approach for migrating away from a given region
> > server implementation back to the default out-of-the box one? My goal
> here
> > is to upgrade our cluster to 0.90 and migrate away from
> IndexedRegionServer
> > back to the default HRegionServer.
> >
> > The only options that I know of at the moment are:
> >
> >   - Export/Import - pros: straightforward, cons: tedious, takes a long
> time
> >   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the pros
> >   and cons here because I've never used it. It seems to require that
> tables
> >   are moved to a different setup. Can it be used in place?
> >
> > If folks know of a different way to do this, please let me know.
> >
> > Thank you in advance for your time.
> >
> > -GS
> >
>

Re: Recommendation for migrating region server implementations

Posted by Stack <st...@duboce.net>.
Do you have disk space to spare?  I'd think that all that is different
about indexed hbase is the WAL format.  If you had an hbase.rootdir
that was the product of a clean shutdown with no WALs to process, I'd
think you could just put 0.90.x on top of it.  If you had the disk space
you could give it a go?
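
i.e. something like (with /hbase standing in for whatever your
hbase.rootdir actually is):

  bin/stop-hbase.sh                # clean shutdown under the old 0.89/ITHBase build
  hadoop fs -ls /hbase/.logs       # ideally nothing left in here to split
  # then point the 0.90 install at the same hbase.rootdir and bring it up
  bin/start-hbase.sh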
St.Ack

On Thu, Mar 24, 2011 at 12:15 PM, George P. Stathis
<gs...@traackr.com> wrote:
> Hey folks,
>
> What would be the best approach for migrating away from a given region
> server implementation back to the default out-of-the box one? My goal here
> is to upgrade our cluster to 0.90 and migrate away from IndexedRegionServer
> back to the default HRegionServer.
>
> The only options that I know of at the moment are:
>
>   - Export/Import - pros: straightforward, cons: tedious, takes a long time
>   - org.apache.hadoop.hbase.mapreduce.CopyTable - not sure about the pros
>   and cons here because I've never used it. It seems to require that tables
>   are moved to a different setup. Can it be used in place?
>
> If folks know of a different way to do this, please let me know.
>
> Thank you in advance for your time.
>
> -GS
>