Posted to dev@phoenix.apache.org by Ted Yu <yu...@gmail.com> on 2014/08/14 17:38:45 UTC

Re: Region not assigned

Adding Phoenix dev@



On Thu, Aug 14, 2014 at 8:05 AM, Kristoffer Sjögren <st...@gmail.com>
wrote:

> It seems the region servers are complaining about the wrong Phoenix
> classes for some reason. We are running 2.2.0, which is the version from
> before Phoenix moved to Apache.
>
> But the region server logs are stuck complaining about
> "org.apache.phoenix.coprocessor.MetaDataEndpointImpl", which IS the
> Apache version? We might have connected with a newer client - but how
> could that trigger this?
>
>
> 2014-08-14 17:01:40,052 DEBUG org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Loading coprocessor class org.apache.phoenix.coprocessor.ServerCachingEndpointImpl with path null and priority 1
> 2014-08-14 17:01:40,053 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: attribute 'coprocessor$12' has invalid coprocessor specification '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|1|'
> 2014-08-14 17:01:40,053 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost:
> java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.ServerCachingEndpointImpl
>   at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:183)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:190)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:154)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:474)
>   at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4084)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4267)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:329)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:100)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
>
> 2014-08-14 17:01:40,053 DEBUG org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Loading coprocessor class org.apache.phoenix.coprocessor.MetaDataEndpointImpl with path null and priority 1
> 2014-08-14 17:01:40,053 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: attribute 'coprocessor$13' has invalid coprocessor specification '|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|1|'
> 2014-08-14 17:01:40,053 WARN org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost:
> java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataEndpointImpl
>   at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:183)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:190)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:154)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:474)
>   at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4084)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4267)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:329)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:100)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
>
>
>
> On Thu, Aug 14, 2014 at 4:31 PM, Kristoffer Sjögren <st...@gmail.com>
> wrote:
>
> > Hi
> >
> > We are running HBase 0.94.6 (CDH 4.4) and have a problem with one table
> > not being assigned to any region. It is Phoenix's SYSTEM.TABLE, so all
> > tables are basically non-functional at the moment.
> >
> > When running hbck we get the following...
> >
> > ERROR: Region { meta => SYSTEM.TABLE,,1385477659546.b739cf1ef14dd664ae873bf5b16e7a35., hdfs => hdfs://primecluster/hbase/SYSTEM.TABLE/b739cf1ef14dd664ae873bf5b16e7a35, deployed => } not deployed on any region server.
> >
> > And when running hbck -repair (see the hbck_repair.txt attachment), it
> > never assigns the region and everything seems stuck.
> >
> > The master log keeps spewing out the following at a very rapid pace (see
> > the master.log attachment)...
> >
> > Any pointers?
> >
> > Cheers,
> > -Kristoffer
> >
> >
>
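[Editorial aside on the WARN lines quoted above: an HBase table coprocessor attribute value is a pipe-delimited string of the form path|class|priority|args. The sketch below (Python, illustrative only; not HBase's actual parser) shows how the value from the log splits, and why the empty leading field means the region server must already have the class on its own classpath - a server still carrying only the pre-Apache Phoenix jar cannot resolve the org.apache.phoenix class names.]

```python
# Illustrative sketch (not HBase's actual parser): a table coprocessor
# attribute value is pipe-delimited as  path|class|priority|args.
def parse_coprocessor_spec(spec):
    """Split a coprocessor attribute value into (path, class, priority, args)."""
    path, klass, priority, args = (spec.split("|") + [""] * 4)[:4]
    # An empty path means "load the class from the region server's classpath",
    # so the server-side jar must actually contain the named class.
    return path or None, klass, int(priority) if priority else None, args

spec = "|org.apache.phoenix.coprocessor.MetaDataEndpointImpl|1|"
path, klass, priority, args = parse_coprocessor_spec(spec)
# path comes back as None here, which is why the load depends entirely on
# which Phoenix jar the region server has installed.
```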

Re: Region not assigned

Posted by Kristoffer Sjögren <st...@gmail.com>.
Seems like upgrading both client and server to version 2.2.3 might have
done the trick.

Thanks James!


On Fri, Aug 15, 2014 at 2:16 AM, James Taylor <ja...@apache.org>
wrote:

> On the first connection to the cluster when you've installed Phoenix
> 2.2.3 and were previously using Phoenix 2.2.2, Phoenix will upgrade
> your Phoenix tables to use the new coprocessor names
> (org.apache.phoenix.*) instead of the old coprocessor names
> (com.salesforce.phoenix.*).
>
> Thanks,
> James
>

Re: Region not assigned

Posted by James Taylor <ja...@apache.org>.
On the first connection to the cluster when you've installed Phoenix
2.2.3 and were previously using Phoenix 2.2.2, Phoenix will upgrade
your Phoenix tables to use the new coprocessor names
(org.apache.phoenix.*) instead of the old coprocessor names
(com.salesforce.phoenix.*).

Thanks,
James
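
[Editorial sketch of the rename James describes (illustrative; not Phoenix's actual upgrade code): the upgrade rewrites coprocessor class names on existing Phoenix tables from the old pre-Apache package to the Apache one.]

```python
# Illustrative sketch (not Phoenix's actual upgrade code): map pre-Apache
# Phoenix coprocessor class names to their Apache equivalents.
OLD_PREFIX = "com.salesforce.phoenix."
NEW_PREFIX = "org.apache.phoenix."

def upgrade_coprocessor_name(class_name):
    """Rewrite a com.salesforce.phoenix.* class name to org.apache.phoenix.*."""
    if class_name.startswith(OLD_PREFIX):
        return NEW_PREFIX + class_name[len(OLD_PREFIX):]
    return class_name  # already an Apache name; leave unchanged
```

Note that the renamed table metadata only helps if the region servers also carry the matching Phoenix server jar, which is consistent with Kristoffer's report that upgrading both client and server to 2.2.3 resolved the issue.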

