Posted to user@phoenix.apache.org by anil gupta <an...@gmail.com> on 2015/03/06 00:32:31 UTC

HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Hi All,

I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the region servers. I was
running the Phoenix 4.1 client because I could not find a tar file for
"Phoenix-4.0.0-incubating".
I tried to create a view on an existing table, and then my entire cluster went
down (all the region servers went down; the master is still up).


This is the exception I am seeing:

2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
regionserver.HRegionServer: ABORTING region server
bigdatabox.com,60020,1423589420136: The coprocessor
org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an
unexpected exception
java.io.IOException: No jar path specified for
org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
        at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown
Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)


We tried to restart the cluster. It died again. It seems it is stuck
at this point looking for the LocalIndexSplitter class. How can I
resolve this error? We can't do anything in the cluster until we fix it.

I was thinking of disabling those tables, but none of the region servers
is coming up. Can anyone suggest how I can bail out of this bad situation?


-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Is HDP 2.1.5 compatible with Phoenix 4.1? If yes, I'll go ahead and put 4.1
on all the region servers. Thanks, everyone, for the quick response.

On Thu, Mar 5, 2015 at 3:47 PM, Nick Dimiduk <nd...@gmail.com> wrote:

> You need to update the phoenix jar on the servers to match the client
> version. Both client and server should be the same versions for now, at
> least until our backward compatibility story is more reliable.
>
> Basically, the new client wrote new metadata to hbase schema and the old
> server jars don't have what's needed at runtime.
>
> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
>
>> Hi All,
>>
>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
>> Phoenix4.1 client because i could not find tar file for
>> "Phoenix4-0.0-incubating".
>> I tried to create a view on existing table and then my entire cluster
>> went down(all the RS went down. MAster is still up).
>>
>>
>> This is the exception i am seeing:
>>
>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>>
>>
>> We tried to restart the cluster. It died again. It seems, its stucks at this point looking for
>>
>> LocalIndexSplitter class. How can i resolve this error? We cant do anything in the cluster until we fix it.
>>
>> I was thinking of disabling those tables but none of the RS is coming up. Can anyone suggest me how can i bail out of this BAD situation.
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Nick Dimiduk <nd...@gmail.com>.
You need to update the Phoenix jar on the servers to match the client
version. Both client and server should be the same version for now, at
least until our backward-compatibility story is more reliable.

Basically, the new client wrote new metadata to the HBase schema, and the
old server jars don't have what's needed at runtime.

On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:

> Hi All,
>
> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
> Phoenix4.1 client because i could not find tar file for
> "Phoenix4-0.0-incubating".
> I tried to create a view on existing table and then my entire cluster went
> down(all the RS went down. MAster is still up).
>
>
> This is the exception i am seeing:
>
> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> We tried to restart the cluster. It died again. It seems, its stucks at this point looking for
>
> LocalIndexSplitter class. How can i resolve this error? We cant do anything in the cluster until we fix it.
>
> I was thinking of disabling those tables but none of the RS is coming up. Can anyone suggest me how can i bail out of this BAD situation.
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Nick Dimiduk <nd...@gmail.com>.
As a terrible hack, you may be able to create a jar containing a no-op
coprocessor called org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
(make sure it extends BaseRegionObserver), drop it into the RS lib
directory, and restart the process. That should satisfy this "table
descriptor death pill", allowing the server to come up far enough that you
can remove the coprocessor using the alter-table procedure I described
earlier.
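For what it's worth, a minimal sketch of such a no-op placeholder might look
like this (it would need to be compiled against the same hbase-server jar the
cluster runs; the fully qualified class name must exactly match the one
recorded in the table descriptor):

```java
// Package and class name must match the missing coprocessor exactly,
// since HBase loads it by the name stored in the table descriptor.
package org.apache.hadoop.hbase.regionserver;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

// A placeholder with no behavior of its own: BaseRegionObserver provides
// do-nothing default implementations of every region hook, so loading this
// class lets the region open even though the real Phoenix jar is absent.
public class LocalIndexSplitter extends BaseRegionObserver {
}
```

Package it as a jar and drop it into the region server's lib directory before
restarting.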

-n

On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com> wrote:

> @James: Could you point me to a place where i can find tar file of
> Phoenix-4.0.0-incubating release? All the links on this page are broken:
> http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>
> On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com> wrote:
>
> > I have tried to disable the table but since none of the RS are coming up.
> > I am unable to do it. Am i missing something?
> > On the server side, we were using the "4.0.0-incubating". It seems like
> my
> > only option is to upgrade the server to 4.1.  At-least, the HBase cluster
> > to be UP. I just want my cluster to come and then i will disable the
> table
> > that has a Phoenix view.
> > What would be the possible side effects of using Phoenix 4.1 with
> > HDP2.1.5.
> > Even after updating to Phoenix4.1, if the problem is not fixed. What is
> > the next alternative?
> >
> >
> > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com> wrote:
> >
> >> Hi Anil,
> >>
> >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
> >> trying out a newer version? As James says, the upgrade must be servers
> >> first, then client. Also, Phoenix versions tend to be picky about their
> >> underlying HBase version.
> >>
> >> You can also try altering the now-broken phoenix tables via HBase shell,
> >> removing the phoenix coprocessor. I've tried this in the past with other
> >> coprocessor-loading woes and had mixed results. Try: disable table,
> alter
> >> table, enable table. There's still sharp edges around coprocessor-based
> >> deployment.
> >>
> >> Keep us posted, and sorry for the mess.
> >>
> >> -n
> >>
> >> [0]:
> >>
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
> >>
> >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com>
> wrote:
> >>
> >>> Unfortunately, we ran out of luck on this one because we are not
> running
> >>> the latest version of HBase. This property was introduced recently:
> >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
> >>> Thanks, Vladimir.
> >>>
> >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
> >>> vladrodionov@gmail.com> wrote:
> >>>
> >>>> Try the following:
> >>>>
> >>>> Update hbase-site.xml config, set
> >>>>
> >>>> hbase.coprocessor.enabled=false
> >>>>
> >>>> or:
> >>>>
> >>>> hbase.coprocessor.user.enabled=false
> >>>>
> >>>> sync config across cluster.
> >>>>
> >>>> restart the cluster
> >>>>
> >>>> then update your table's settings in hbase shell
> >>>>
> >>>> -Vlad
> >>>>
> >>>>
> >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com>
> >>>> wrote:
> >>>>
> >>>>> Hi All,
> >>>>>
> >>>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
> >>>>> Phoenix4.1 client because i could not find tar file for
> >>>>> "Phoenix4-0.0-incubating".
> >>>>> I tried to create a view on existing table and then my entire cluster
> >>>>> went down(all the RS went down. MAster is still up).
> >>>>>
> >>>>>
> >>>>> This is the exception i am seeing:
> >>>>>
> >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
> regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136:
> The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> threw an unexpected exception
> >>>>> java.io.IOException: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> >>>>>         at
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
> >>>>>         at
> sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
> >>>>>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>>>>         at
> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
> >>>>>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
> >>>>>         at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> >>>>>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >>>>>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >>>>>         at java.lang.Thread.run(Thread.java:744)
> >>>>>
> >>>>>
> >>>>> We tried to restart the cluster. It died again. It seems, its stucks
> at this point looking for
> >>>>>
> >>>>> LocalIndexSplitter class. How can i resolve this error? We cant do
> anything in the cluster until we fix it.
> >>>>>
> >>>>> I was thinking of disabling those tables but none of the RS is
> coming up. Can anyone suggest me how can i bail out of this BAD situation.
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Thanks & Regards,
> >>>>> Anil Gupta
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>> --
> >>> Thanks & Regards,
> >>> Anil Gupta
> >>>
> >>
> >>
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
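For reference, the disable/alter/enable procedure discussed in this thread
might look roughly like the following in the HBase shell. The table name is
hypothetical, and the coprocessor attribute key (coprocessor$1) depends on
the order in which coprocessors were registered, so check the table
description first:

```
# run inside 'hbase shell'
describe 'MY_TABLE'       # find the coprocessor$N attribute to remove
disable 'MY_TABLE'
alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
enable 'MY_TABLE'
```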

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
The 4.0.1 build failed with a JUnit failure:
Failed tests:   testSkipScan(org.apache.phoenix.end2end.VariableLengthPKIT)
Tests run: 1115, Failures: 1, Errors: 0, Skipped: 4

I disabled the JUnit tests in the build and it then completed successfully.
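Assuming a standard Maven build of the Phoenix source tree, skipping the test
run is typically done with a flag along these lines:

```shell
# Compile and package without executing the (failing) test suite.
mvn clean package -DskipTests
```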


On Fri, Mar 6, 2015 at 1:49 PM, anil gupta <an...@gmail.com> wrote:

> Update: I checked out 4.0.1 branch from git and local build is underway.
>
> On Fri, Mar 6, 2015 at 12:50 PM, anil gupta <an...@gmail.com> wrote:
>
>> Hi James/Mujtaba,
>>
>> I am giving a tech talk of HBase on Monday morning. I wanted to demo
>> Phoenix as part of that. Installation of 4.0.0 jars can only be done in
>> office hours because i am dependent on other team to do it. If i can get
>> the jar in 1-2 hours. I would really appreciate it.
>>
>> Thanks,
>> Anil Gupta
>>
>>
>> On Thu, Mar 5, 2015 at 10:10 PM, James Taylor <ja...@apache.org>
>> wrote:
>>
>>> Mujtaba - do you know where our 4.0.0-incubating artifacts are?
>>>
>>> On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <an...@gmail.com>
>>> wrote:
>>> > Hi Ted,
>>> >
>>> > In morning today, I downloaded 4.1 from the link you provided. The
>>> problem
>>> > is that i was unable to find 4.0.0-incubating release artifacts. So, i
>>> > thought to use 4.1(thinking 4.1 will be a minor & compatible upgrade
>>> to 4.0)
>>> > as my client.
>>> > IMO, we should also have 4.0.0-incubating artifacts since its the
>>> compatible
>>> > version with HDP2.1.5(6 month old release of HDP)
>>> >
>>> > Thanks,
>>> > Anil Gupta
>>> >
>>> > On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:
>>> >>
>>> >> Ani:
>>> >> You can find Phoenix release artifacts here:
>>> >> http://archive.apache.org/dist/phoenix/
>>> >>
>>> >> e.g. for 4.1.0:
>>> >> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>>> >>
>>> >> Cheers
>>> >>
>>> >> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com>
>>> wrote:
>>> >>
>>> >> > @James: Could you point me to a place where i can find tar file of
>>> >> > Phoenix-4.0.0-incubating release? All the links on this page are
>>> broken:
>>> >> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>>> >> >
>>> >> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
>>> >> > wrote:
>>> >> >
>>> >> > > I have tried to disable the table but since none of the RS are
>>> coming
>>> >> > > up.
>>> >> > > I am unable to do it. Am i missing something?
>>> >> > > On the server side, we were using the "4.0.0-incubating". It seems
>>> >> > > like
>>> >> > my
>>> >> > > only option is to upgrade the server to 4.1.  At-least, the HBase
>>> >> > > cluster
>>> >> > > to be UP. I just want my cluster to come and then i will disable
>>> the
>>> >> > table
>>> >> > > that has a Phoenix view.
>>> >> > > What would be the possible side effects of using Phoenix 4.1 with
>>> >> > > HDP2.1.5.
>>> >> > > Even after updating to Phoenix4.1, if the problem is not fixed.
>>> What
>>> >> > > is
>>> >> > > the next alternative?
>>> >> > >
>>> >> > >
>>> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
>>> >> > > wrote:
>>> >> > >
>>> >> > >> Hi Anil,
>>> >> > >>
>>> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
>>> shipped,
>>> >> > >> or
>>> >> > >> trying out a newer version? As James says, the upgrade must be
>>> >> > >> servers
>>> >> > >> first, then client. Also, Phoenix versions tend to be picky about
>>> >> > >> their
>>> >> > >> underlying HBase version.
>>> >> > >>
>>> >> > >> You can also try altering the now-broken phoenix tables via HBase
>>> >> > >> shell,
>>> >> > >> removing the phoenix coprocessor. I've tried this in the past
>>> with
>>> >> > >> other
>>> >> > >> coprocessor-loading woes and had mixed results. Try: disable
>>> table,
>>> >> > alter
>>> >> > >> table, enable table. There's still sharp edges around
>>> >> > >> coprocessor-based
>>> >> > >> deployment.
>>> >> > >>
>>> >> > >> Keep us posted, and sorry for the mess.
>>> >> > >>
>>> >> > >> -n
>>> >> > >>
>>> >> > >> [0]:
>>> >> > >>
>>> >> >
>>> >> >
>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>>> >> > >>
>>> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > wrote:
>>> >> > >>
>>> >> > >>> Unfortunately, we ran out of luck on this one because we are not
>>> >> > running
>>> >> > >>> the latest version of HBase. This property was introduced
>>> recently:
>>> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>>> >> > >>> Thanks, Vladimir.
>>> >> > >>>
>>> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>>> >> > >>> vladrodionov@gmail.com> wrote:
>>> >> > >>>
>>> >> > >>>> Try the following:
>>> >> > >>>>
>>> >> > >>>> Update hbase-site.xml config, set
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.enabled=false
>>> >> > >>>>
>>> >> > >>>> or:
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.user.enabled=false
>>> >> > >>>>
>>> >> > >>>> sync config across cluster.
>>> >> > >>>>
>>> >> > >>>> restart the cluster
>>> >> > >>>>
>>> >> > >>>> then update your table's settings in hbase shell
>>> >> > >>>>
>>> >> > >>>> -Vlad
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > >>>> wrote:
>>> >> > >>>>
>>> >> > >>>>> Hi All,
>>> >> > >>>>>
>>> >> > >>>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was
>>> >> > >>>>> running
>>> >> > >>>>> Phoenix4.1 client because i could not find tar file for
>>> >> > >>>>> "Phoenix4-0.0-incubating".
>>> >> > >>>>> I tried to create a view on existing table and then my entire
>>> >> > >>>>> cluster
>>> >> > >>>>> went down(all the RS went down. MAster is still up).
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> This is the exception i am seeing:
>>> >> > >>>>>
>>> >> > >>>>> 2015-03-05 14:30:53,296 FATAL
2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
        at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

We tried to restart the cluster and it died again. It seems to get stuck at
this point looking for the LocalIndexSplitter class. How can I resolve this
error? We can't do anything in the cluster until it is fixed.

I was thinking of disabling those tables, but none of the region servers is
coming up. Can anyone suggest how I can bail out of this bad situation?
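[Editor's note] For background on why an empty jar path aborts the region server: a table's coprocessor attribute has the pipe-separated form `jar-path|classname|priority|args`. When the jar path is empty, HBase must find the class on the region server's own classpath; here the Phoenix 4.1 client wrote a LocalIndexSplitter entry that the 4.0.0 server jars apparently could not satisfy. A rough, hypothetical sketch of that parsing (illustrative only, not HBase's actual code):

```python
# Hypothetical sketch of how an HBase table's coprocessor attribute is
# interpreted; the real logic lives in CoprocessorHost. Attribute format:
#   jar-path|classname|priority|key=value,...
DEFAULT_PRIORITY = 1073741823  # Coprocessor.PRIORITY_USER in HBase

def parse_coprocessor_spec(spec):
    parts = [p.strip() for p in spec.split("|")]
    path = parts[0] or None  # empty path => class must be on the RS classpath
    classname = parts[1]
    priority = int(parts[2]) if len(parts) > 2 and parts[2] else DEFAULT_PRIORITY
    return {"path": path, "class": classname, "priority": priority}

# The entry written for local index splitting has no jar path, so the class
# has to already be on the region server's classpath; when it is not,
# CoprocessorHost fails with "No jar path specified for ..." and the RS aborts.
spec = "|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|1|"
print(parse_coprocessor_spec(spec))
```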



-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
The 4.0.1 build failed with a JUnit failure:
Failed tests:   testSkipScan(org.apache.phoenix.end2end.VariableLengthPKIT)
Tests run: 1115, Failures: 1, Errors: 0, Skipped: 4

I disabled the JUnit tests in the build and it completed successfully.
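[Editor's note] The build described here can be reproduced roughly as follows. The repository URL and branch name are assumptions based on the thread (verify with `git branch -a`); `-DskipTests` is the standard Maven flag for skipping the failing JUnit run:

```shell
# Check out the Phoenix source and build it without running tests.
git clone https://git-wip-us.apache.org/repos/asf/phoenix.git
cd phoenix
git checkout 4.0.1        # branch/tag name per the thread; confirm locally
mvn clean package -DskipTests
```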


On Fri, Mar 6, 2015 at 1:49 PM, anil gupta <an...@gmail.com> wrote:

> Update: I checked out 4.0.1 branch from git and local build is underway.
>
> On Fri, Mar 6, 2015 at 12:50 PM, anil gupta <an...@gmail.com> wrote:
>
>> Hi James/Mujtaba,
>>
>> I am giving a tech talk on HBase on Monday morning and wanted to demo
>> Phoenix as part of it. Installation of the 4.0.0 jars can only be done
>> during office hours because I depend on another team to do it. If I
>> could get the jar in the next 1-2 hours, I would really appreciate it.
>>
>> Thanks,
>> Anil Gupta
>>
>>
>> On Thu, Mar 5, 2015 at 10:10 PM, James Taylor <ja...@apache.org>
>> wrote:
>>
>>> Mujtaba - do you know where our 4.0.0-incubating artifacts are?
>>>
>>> On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <an...@gmail.com>
>>> wrote:
>>> > Hi Ted,
>>> >
>>> > This morning I downloaded 4.1 from the link you provided. The problem
>>> > is that I was unable to find the 4.0.0-incubating release artifacts,
>>> > so I decided to use 4.1 as my client, thinking it would be a minor,
>>> > compatible upgrade to 4.0.
>>> > IMO, we should also publish the 4.0.0-incubating artifacts, since that
>>> > is the version compatible with HDP 2.1.5 (a six-month-old HDP release).
>>> >
>>> > Thanks,
>>> > Anil Gupta
>>> >
>>> > On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:
>>> >>
>>> >> Ani:
>>> >> You can find Phoenix release artifacts here:
>>> >> http://archive.apache.org/dist/phoenix/
>>> >>
>>> >> e.g. for 4.1.0:
>>> >> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>>> >>
>>> >> Cheers
>>> >>
>>> >> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com>
>>> wrote:
>>> >>
>>> >> > @James: Could you point me to a place where i can find tar file of
>>> >> > Phoenix-4.0.0-incubating release? All the links on this page are
>>> broken:
>>> >> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>>> >> >
>>> >> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
>>> >> > wrote:
>>> >> >
>>> >> > > I have tried to disable the table, but since none of the region
>>> >> > > servers are coming up, I am unable to do it. Am I missing something?
>>> >> > > On the server side we were using 4.0.0-incubating, so it seems my
>>> >> > > only option is to upgrade the server to 4.1, at least to get the
>>> >> > > HBase cluster up. I just want the cluster to come up, and then I
>>> >> > > will disable the table that has the Phoenix view.
>>> >> > > What would be the possible side effects of using Phoenix 4.1 with
>>> >> > > HDP 2.1.5? And if the problem is still not fixed after updating to
>>> >> > > Phoenix 4.1, what is the next alternative?
>>> >> > >
>>> >> > >
>>> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
>>> >> > > wrote:
>>> >> > >
>>> >> > >> Hi Anil,
>>> >> > >>
>>> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
>>> shipped,
>>> >> > >> or
>>> >> > >> trying out a newer version? As James says, the upgrade must be
>>> >> > >> servers
>>> >> > >> first, then client. Also, Phoenix versions tend to be picky about
>>> >> > >> their
>>> >> > >> underlying HBase version.
>>> >> > >>
>>> >> > >> You can also try altering the now-broken phoenix tables via HBase
>>> >> > >> shell,
>>> >> > >> removing the phoenix coprocessor. I've tried this in the past
>>> with
>>> >> > >> other
>>> >> > >> coprocessor-loading woes and had mixed results. Try: disable
>>> table,
>>> >> > alter
>>> >> > >> table, enable table. There are still sharp edges around
>>> >> > >> coprocessor-based deployment.
>>> >> > >>
>>> >> > >> Keep us posted, and sorry for the mess.
>>> >> > >>
>>> >> > >> -n
>>> >> > >>
>>> >> > >> [0]:
>>> >> > >>
>>> >> >
>>> >> >
>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
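[Editor's note] Nick's disable/alter/enable suggestion, sketched as HBase shell commands. The table name and the `coprocessor$N` slot number are placeholders; run `describe` first to see which attribute holds the Phoenix coprocessor entry:

```
hbase> describe 'MY_TABLE'
hbase> disable 'MY_TABLE'
hbase> alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
hbase> enable 'MY_TABLE'
```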
>>> >> > >>
>>> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > wrote:
>>> >> > >>
>>> >> > >>> Unfortunately, we ran out of luck on this one because we are not
>>> >> > running
>>> >> > >>> the latest version of HBase. This property was introduced
>>> recently:
>>> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>>> >> > >>> Thanks, Vladimir.
>>> >> > >>>
>>> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>>> >> > >>> vladrodionov@gmail.com> wrote:
>>> >> > >>>
>>> >> > >>>> Try the following:
>>> >> > >>>>
>>> >> > >>>> Update the hbase-site.xml config, set
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.enabled=false
>>> >> > >>>>
>>> >> > >>>> or:
>>> >> > >>>>
>>> >> > >>>> hbase.coprocessor.user.enabled=false
>>> >> > >>>>
>>> >> > >>>> sync the config across the cluster,
>>> >> > >>>>
>>> >> > >>>> restart the cluster,
>>> >> > >>>>
>>> >> > >>>> then update your table's settings in the hbase shell
>>> >> > >>>>
>>> >> > >>>> -Vlad
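[Editor's note] Vladimir's workaround as an hbase-site.xml fragment. Note, per Anil's reply above it, these properties only exist in HBase versions that include HBASE-13044, so this is a sketch for newer clusters:

```xml
<!-- Disable loading of user/table coprocessors so regions can open
     even when a coprocessor class or jar is missing. -->
<property>
  <name>hbase.coprocessor.user.enabled</name>
  <value>false</value>
</property>
```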
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > >>>> wrote:
>>> >> > >>>>
>>> >> > >>>>> Hi All,
>>> >> > >>>>>
>>> >> > >>>>> I am using HDP 2.1.5 with Phoenix 4.0.0 installed on the region
>>> >> > >>>>> servers. I was running the Phoenix 4.1 client because I could
>>> >> > >>>>> not find a tar file for "phoenix-4.0.0-incubating".
>>> >> > >>>>> I tried to create a view on an existing table, and then my
>>> >> > >>>>> entire cluster went down (all the region servers went down; the
>>> >> > >>>>> master is still up).
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> This is the exception i am seeing:
>>> >> > >>>>>
>>> >> > >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
>>> >> > >>>>> regionserver.HRegionServer: ABORTING region server
>>> >> > >>>>> bigdatabox.com,60020,1423589420136: The coprocessor
>>> >> > >>>>> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw
>>> >> > >>>>> an unexpected exception
>>> >> > >>>>> java.io.IOException: No jar path specified for
>>> >> > >>>>> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>> >> > >>>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>> >> > >>>>>         ...
>>> >> > >>>>>         at java.lang.Thread.run(Thread.java:744)
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> We tried to restart the cluster and it died again. It seems to
>>> >> > >>>>> get stuck at this point looking for the LocalIndexSplitter
>>> >> > >>>>> class. How can I resolve this error? We can't do anything in
>>> >> > >>>>> the cluster until it is fixed.
>>> >> > >>>>>
>>> >> > >>>>> I was thinking of disabling those tables, but none of the
>>> >> > >>>>> region servers is coming up. Can anyone suggest how I can bail
>>> >> > >>>>> out of this bad situation?
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> --
>>> >> > >>>>> Thanks & Regards,
>>> >> > >>>>> Anil Gupta



-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Update: I checked out the 4.0.1 branch from git and a local build is underway.




-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Chandu <ch...@gmail.com>.
np, thanks for the update Anil.

On 8 March 2015 at 00:00, anil gupta <an...@gmail.com> wrote:

> Hi Chandu,
>
> Unfortunately, it's a company-private event, so I won't be able to make it
> public.
>
> Thanks,
> Anil Gupta
>
> On Sat, Mar 7, 2015 at 2:03 AM, Chandu <ch...@gmail.com> wrote:
>
>> Hi Anil,
>>
>> Is it a webinar? How can I join the meeting?
>>
>> Thanks,
>> Chandu.
>>



-- 
Cheers,
Chandu.

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Hi Chandu,

Unfortunately, it's a company-private event, so I won't be able to make it
public.

Thanks,
Anil Gupta

>>> the
>>> >> > table
>>> >> > > that has a Phoenix view.
>>> >> > > What would be the possible side effects of using Phoenix 4.1 with
>>> >> > > HDP2.1.5.
>>> >> > > Even after updating to Phoenix4.1, if the problem is not fixed.
>>> What
>>> >> > > is
>>> >> > > the next alternative?
>>> >> > >
>>> >> > >
>>> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
>>> >> > > wrote:
>>> >> > >
>>> >> > >> Hi Anil,
>>> >> > >>
>>> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
>>> shipped,
>>> >> > >> or
>>> >> > >> trying out a newer version? As James says, the upgrade must be
>>> >> > >> servers
>>> >> > >> first, then client. Also, Phoenix versions tend to be picky about
>>> >> > >> their
>>> >> > >> underlying HBase version.
>>> >> > >>
>>> >> > >> You can also try altering the now-broken phoenix tables via HBase
>>> >> > >> shell,
>>> >> > >> removing the phoenix coprocessor. I've tried this in the past
>>> with
>>> >> > >> other
>>> >> > >> coprocessor-loading woes and had mixed results. Try: disable
>>> table,
>>> >> > alter
>>> >> > >> table, enable table. There's still sharp edges around
>>> >> > >> coprocessor-based
>>> >> > >> deployment.
>>> >> > >>
>>> >> > >> Keep us posted, and sorry for the mess.
>>> >> > >>
>>> >> > >> -n
>>> >> > >>
>>> >> > >> [0]:
>>> >> > >>
>>> >> >
>>> >> >
>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>>> >> > >>
>>> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > wrote:
>>> >> > >>
>>> >> > >>> Unfortunately, we ran out of luck on this one because we are not
>>> >> > running
>>> >> > >>> the latest version of HBase. This property was introduced
>>> recently:
>>> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>>> >> > >>> Thanks, Vladimir.
>>> >> > >>>
>>> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>>> >> > >>> vladrodionov@gmail.com> wrote:
>>> >> > >>>
>>> >> > >>>> Try the following:
>>> >> > >>>>
>>> >> > >>>> Update hbase-site.xml config, set
>>> >> > >>>>
>>> >> > >>>>> hbase.coprocessor.enabled=false
>>> >> > >>>>
>>> >> > >>>> or:
>>> >> > >>>>
>>> >> > >>>>> hbase.coprocessor.user.enabled=false
>>> >> > >>>>
>>> >> > >>>> sync config across cluster.
>>> >> > >>>>
>>> >> > >>>> restart the cluster
>>> >> > >>>>
>>> >> > >>>>> then update your table's settings in hbase shell
>>> >> > >>>>
>>> >> > >>>> -Vlad
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <
>>> anilgupta84@gmail.com>
>>> >> > >>>> wrote:
>>> >> > >>>>
>>> >> > >>>>> Hi All,
>>> >> > >>>>>
>>> >> > >>>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was
>>> >> > >>>>> running
>>> >> > >>>>> Phoenix4.1 client because i could not find tar file for
>>> >> > >>>>> "Phoenix4-0.0-incubating".
>>> >> > >>>>> I tried to create a view on existing table and then my entire
>>> >> > >>>>> cluster
>>> >> > >>>>> went down (all the RS went down; the Master is still up).
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> This is the exception i am seeing:
>>> >> > >>>>>
>>> >> > >>>>> 2015-03-05 14:30:53,296 FATAL
>>> [RS_OPEN_REGION-hdpslave8:60020-2]
>>> >> > regionserver.HRegionServer: ABORTING region server
>>> >> > bigdatabox.com,60020,1423589420136:
>>> >> > The coprocessor
>>> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>> >> > threw an unexpected exception
>>> >> > >>>>> java.io.IOException: No jar path specified for
>>> >> > org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>> >> > >>>>>         at
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>> >> > >>>>>         at
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>> >> > >>>>>         at
>>> >> > sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown
>>> Source)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> >> > >>>>>         at
>>> >> > java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>> >> > >>>>>         at
>>> >> >
>>> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> >> > >>>>>         at
>>> >> >
>>> >> >
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> >> > >>>>>         at java.lang.Thread.run(Thread.java:744)
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> We tried to restart the cluster. It died again. It seems it's
>>> >> > >>>>> stuck at this point looking for the
>>> >> > >>>>> LocalIndexSplitter class. How can I resolve this error? We
>>> >> > >>>>> can't do anything in the cluster until we fix it.
>>> >> > >>>>>
>>> >> > >>>>> I was thinking of disabling those tables, but none of the RS is
>>> >> > >>>>> coming up. Can anyone suggest how I can bail out of this BAD
>>> >> > >>>>> situation?
>>> >> > >>>>>
>>> >> > >>>>>
>>> >> > >>>>> --
>>> >> > >>>>> Thanks & Regards,
>>> >> > >>>>> Anil Gupta
>>> >> > >>>>>
>>> >> > >>>>
>>> >> > >>>>
>>> >> > >>>
>>> >> > >>>
>>> >> > >>> --
>>> >> > >>> Thanks & Regards,
>>> >> > >>> Anil Gupta
>>> >> > >>>
>>> >> > >>
>>> >> > >>
>>> >> > >
>>> >> > >
>>> >> > > --
>>> >> > > Thanks & Regards,
>>> >> > > Anil Gupta
>>> >> > >
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > Thanks & Regards,
>>> >> > Anil Gupta
>>> >> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thanks & Regards,
>>> > Anil Gupta
>>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>
>
> --
> Cheers,
> Chandu.
>



-- 
Thanks & Regards,
Anil Gupta
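
A sketch of the hbase-site.xml change suggested above. The properties are `hbase.coprocessor.enabled` and `hbase.coprocessor.user.enabled`, and, per the HBASE-13044 reference earlier in the thread, only HBase releases that carry that change honor them:

```xml
<!-- hbase-site.xml: cluster-wide coprocessor kill switch, for recovery only.
     Requires an HBase release that includes HBASE-13044. -->
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>false</value>
</property>

<!-- Alternatively, keep system coprocessors but skip table-attribute
     ("user") coprocessors such as LocalIndexSplitter: -->
<property>
  <name>hbase.coprocessor.user.enabled</name>
  <value>false</value>
</property>
```

With the region servers back up, the broken table attribute can then be removed in the HBase shell (disable the table, alter it to unset its `coprocessor$1` attribute, enable it again), after which these properties should be reverted.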

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Chandu <ch...@gmail.com>.
Hi Anil,

Is it a webinar? How can I join the meeting?

Thanks,
Chandu.




-- 
Cheers,
Chandu.

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Update: I checked out the 4.0.1 branch from git, and a local build is underway.




-- 
Thanks & Regards,
Anil Gupta
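
The underlying failure mode is worth spelling out. A table-level coprocessor is declared in a `coprocessor$N` table attribute of the form `jar-path|classname|priority|args`. Phoenix registers its coprocessors with an empty jar-path field, which tells HBase to load the class from the region server's own classpath; when the Phoenix server jar there is missing or mismatched, the class cannot be resolved and the load aborts with the "No jar path specified" error seen above. A small illustration of that attribute format (a hypothetical parser for this email, not HBase's actual code):

```python
# Illustrative parser for an HBase table coprocessor attribute value.
# Format: "jar-path|classname|priority|arg1=v1,arg2=v2"
# (hypothetical helper; HBase's real parsing lives in RegionCoprocessorHost).

def parse_coprocessor_spec(spec: str) -> dict:
    fields = spec.split("|")
    jar_path = fields[0].strip()
    class_name = fields[1].strip()
    priority = int(fields[2]) if len(fields) > 2 and fields[2].strip() else None
    return {
        # An empty jar-path field means "load from the RS classpath".
        "jar_path": jar_path or None,
        "class": class_name,
        "priority": priority,
    }

# The spec Phoenix sets for LocalIndexSplitter has no jar path:
spec = "|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|1|"
parsed = parse_coprocessor_spec(spec)
# parsed["jar_path"] is None, so HBase must find the class on the region
# server's classpath; if it cannot, region open fails as in the log above.
```

So the fix is not in the table attribute itself but in getting a matching Phoenix server jar onto every region server's classpath (or temporarily removing the attribute while recovering).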

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Hi James/Mujtaba,

I am giving a tech talk on HBase on Monday morning and wanted to demo
Phoenix as part of it. Installation of the 4.0.0 jars can only be done
during office hours because I depend on another team to do it. If I can
get the jar in 1-2 hours, I would really appreciate it.

Thanks,
Anil Gupta


On Thu, Mar 5, 2015 at 10:10 PM, James Taylor <ja...@apache.org>
wrote:

> Mujtaba - do you know where our 4.0.0-incubating artifacts are?
>
> On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <an...@gmail.com> wrote:
> > Hi Ted,
> >
> > In morning today, I downloaded 4.1 from the link you provided. The
> problem
> > is that i was unable to find 4.0.0-incubating release artifacts. So, i
> > thought to use 4.1(thinking 4.1 will be a minor & compatible upgrade to
> 4.0)
> > as my client.
> > IMO, we should also have 4.0.0-incubating artifacts since its the
> compatible
> > version with HDP2.1.5(6 month old release of HDP)
> >
> > Thanks,
> > Anil Gupta
> >
> > On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:
> >>
> >> Ani:
> >> You can find Phoenix release artifacts here:
> >> http://archive.apache.org/dist/phoenix/
> >>
> >> e.g. for 4.1.0:
> >> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
> >>
> >> Cheers
> >>
> >> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com>
> wrote:
> >>
> >> > @James: Could you point me to a place where i can find tar file of
> >> > Phoenix-4.0.0-incubating release? All the links on this page are
> broken:
> >> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
> >> >
> >> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
> >> > wrote:
> >> >
> >> > > I have tried to disable the table but since none of the RS are
> coming
> >> > > up.
> >> > > I am unable to do it. Am i missing something?
> >> > > On the server side, we were using the "4.0.0-incubating". It seems
> >> > > like
> >> > my
> >> > > only option is to upgrade the server to 4.1.  At-least, the HBase
> >> > > cluster
> >> > > to be UP. I just want my cluster to come and then i will disable the
> >> > table
> >> > > that has a Phoenix view.
> >> > > What would be the possible side effects of using Phoenix 4.1 with
> >> > > HDP2.1.5.
> >> > > Even after updating to Phoenix4.1, if the problem is not fixed. What
> >> > > is
> >> > > the next alternative?
> >> > >
> >> > >
> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
> >> > > wrote:
> >> > >
> >> > >> Hi Anil,
> >> > >>
> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
> shipped,
> >> > >> or
> >> > >> trying out a newer version? As James says, the upgrade must be
> >> > >> servers
> >> > >> first, then client. Also, Phoenix versions tend to be picky about
> >> > >> their
> >> > >> underlying HBase version.
> >> > >>
> >> > >> You can also try altering the now-broken phoenix tables via HBase
> >> > >> shell,
> >> > >> removing the phoenix coprocessor. I've tried this in the past with
> >> > >> other
> >> > >> coprocessor-loading woes and had mixed results. Try: disable table,
> >> > alter
> >> > >> table, enable table. There's still sharp edges around
> >> > >> coprocessor-based
> >> > >> deployment.
> >> > >>
> >> > >> Keep us posted, and sorry for the mess.
> >> > >>
> >> > >> -n
> >> > >>
> >> > >> [0]:
> >> > >>
> >> >
> >> >
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
> >> > >>
> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com>
> >> > wrote:
> >> > >>
> >> > >>> Unfortunately, we ran out of luck on this one because we are not
> >> > running
> >> > >>> the latest version of HBase. This property was introduced
> recently:
> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
> >> > >>> Thanks, Vladimir.
> >> > >>>
> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
> >> > >>> vladrodionov@gmail.com> wrote:
> >> > >>>
> >> > >>>> Try the following:
> >> > >>>>
> >> > >>>> Update hbase-site.xml config, set
> >> > >>>>
> >> > >>>> hbase.coprocessor.enabled=false
> >> > >>>>
> >> > >>>> or:
> >> > >>>>
> >> > >>>> hbase.coprocessor.user.enabled=false
> >> > >>>>
> >> > >>>> sync the config across the cluster.
> >> > >>>>
> >> > >>>> restart the cluster
> >> > >>>>
> >> > >>>> then update your table's settings in the hbase shell
> >> > >>>>
> >> > >>>> -Vlad
> >> > >>>>
> >> > >>>>
> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <
> anilgupta84@gmail.com>
> >> > >>>> wrote:
> >> > >>>>
> >> > >>>>> Hi All,
> >> > >>>>>
> >> > >>>>> I am using HDP2.1.5; Phoenix 4.0.0 was installed on the RS. I was
> >> > >>>>> running the Phoenix 4.1 client because I could not find a tar file
> >> > >>>>> for "Phoenix-4.0.0-incubating".
> >> > >>>>> I tried to create a view on an existing table, and then my entire
> >> > >>>>> cluster went down (all the RS went down; the Master is still up).
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> This is the exception i am seeing:
> >> > >>>>>
> >> > >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
> >> > regionserver.HRegionServer: ABORTING region server
> >> > bigdatabox.com,60020,1423589420136:
> >> > The coprocessor
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> >> > threw an unexpected exception
> >> > >>>>> java.io.IOException: No jar path specified for
> >> > org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
> >> > >>>>>         at
> >> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
> >> > >>>>>         at
> >> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
> >> > >>>>>         at
> >> > sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
> >> > >>>>>         at
> >> >
> >> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> > >>>>>         at
> >> > java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
> >> > >>>>>         at
> >> >
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> >> > >>>>>         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> > >>>>>         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> > >>>>>         at java.lang.Thread.run(Thread.java:744)
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> We tried to restart the cluster. It died again. It seems it is
> >> > >>>>> stuck at this point looking for the LocalIndexSplitter class.
> >> > >>>>> How can I resolve this error? We can't do anything in the cluster
> >> > >>>>> until we fix it.
> >> > >>>>>
> >> > >>>>> I was thinking of disabling those tables, but none of the RS are
> >> > coming up. Can anyone suggest how I can bail out of this BAD
> >> > situation?
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> --
> >> > >>>>> Thanks & Regards,
> >> > >>>>> Anil Gupta
> >> > >>>>>
> >> > >>>>
> >> > >>>>
> >> > >>>
> >> > >>>
> >> > >>> --
> >> > >>> Thanks & Regards,
> >> > >>> Anil Gupta
> >> > >>>
> >> > >>
> >> > >>
> >> > >
> >> > >
> >> > > --
> >> > > Thanks & Regards,
> >> > > Anil Gupta
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks & Regards,
> >> > Anil Gupta
> >> >
> >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta
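
Vladimir's suggestion quoted above corresponds to an hbase-site.xml fragment like the following. Note that the real property names are hbase.coprocessor.enabled and hbase.coprocessor.user.enabled (the emails misspell them as "enabed"), and this is only a sketch: disabling coprocessor loading cluster-wide also disables Phoenix itself until the settings are reverted.

```xml
<!-- hbase-site.xml: temporarily skip loading coprocessors so regions can open.
     Property names assume the standard HBase configuration keys. -->
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>false</value>
</property>
<property>
  <name>hbase.coprocessor.user.enabled</name>
  <value>false</value>
</property>
```

As Anil notes above, these keys were introduced by HBASE-13044, so they have no effect on HBase releases that predate that change.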

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Hi James/Mujtaba,

I am giving a tech talk on HBase on Monday morning and wanted to demo
Phoenix as part of it. Installation of the 4.0.0 jars can only be done
during office hours because I depend on another team to do it. If I can
get the jar in the next 1-2 hours, I would really appreciate it.

Thanks,
Anil Gupta


On Thu, Mar 5, 2015 at 10:10 PM, James Taylor <ja...@apache.org>
wrote:

> Mujtaba - do you know where our 4.0.0-incubating artifacts are?
>
> On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <an...@gmail.com> wrote:
> > Hi Ted,
> >
> > This morning, I downloaded 4.1 from the link you provided. The problem
> > is that I was unable to find the 4.0.0-incubating release artifacts, so
> > I thought to use 4.1 (thinking 4.1 would be a minor, compatible upgrade
> > to 4.0) as my client.
> > IMO, we should also have the 4.0.0-incubating artifacts, since that is
> > the version compatible with HDP2.1.5 (a 6-month-old release of HDP).
> >
> > Thanks,
> > Anil Gupta
> >
> > On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:
> >>
> >> Anil:
> >> You can find Phoenix release artifacts here:
> >> http://archive.apache.org/dist/phoenix/
> >>
> >> e.g. for 4.1.0:
> >> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
> >>
> >> Cheers
> >>
> >> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com>
> wrote:
> >>
> >> > @James: Could you point me to a place where I can find a tar file of
> >> > the Phoenix-4.0.0-incubating release? All the links on this page are
> >> > broken:
> >> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
> >> >
> >> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
> >> > wrote:
> >> >
> >> > > I have tried to disable the table, but since none of the RS are
> >> > > coming up, I am unable to do it. Am I missing something?
> >> > > On the server side, we were using "4.0.0-incubating". It seems like
> >> > > my only option is to upgrade the server to 4.1, at least to get the
> >> > > HBase cluster UP. I just want my cluster to come up, and then I will
> >> > > disable the table that has the Phoenix view.
> >> > > What would be the possible side effects of using Phoenix 4.1 with
> >> > > HDP2.1.5?
> >> > > And if, even after updating to Phoenix 4.1, the problem is not fixed,
> >> > > what is the next alternative?
> >> > >
> >> > >
> >> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
> >> > > wrote:
> >> > >
> >> > >> Hi Anil,
> >> > >>
> >> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version
> shipped,
> >> > >> or
> >> > >> trying out a newer version? As James says, the upgrade must be
> >> > >> servers
> >> > >> first, then client. Also, Phoenix versions tend to be picky about
> >> > >> their
> >> > >> underlying HBase version.
> >> > >>
> >> > >> You can also try altering the now-broken phoenix tables via HBase
> >> > >> shell,
> >> > >> removing the phoenix coprocessor. I've tried this in the past with
> >> > >> other
> >> > >> coprocessor-loading woes and had mixed results. Try: disable table,
> >> > alter
> >> > >> table, enable table. There's still sharp edges around
> >> > >> coprocessor-based
> >> > >> deployment.
> >> > >>
> >> > >> Keep us posted, and sorry for the mess.
> >> > >>
> >> > >> -n
> >> > >>
> >> > >> [0]:
> >> > >>
> >> >
> >> >
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
> >> > >>
> >> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com>
> >> > wrote:
> >> > >>
> >> > >>> Unfortunately, we ran out of luck on this one because we are not
> >> > running
> >> > >>> the latest version of HBase. This property was introduced
> recently:
> >> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
> >> > >>> Thanks, Vladimir.
> >> > >>>
> >> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
> >> > >>> vladrodionov@gmail.com> wrote:
> >> > >>>
> >> > >>>> Try the following:
> >> > >>>>
> >> > >>>> Update hbase-site.xml config, set
> >> > >>>>
> >> > >>>> hbase.coprocessor.enabled=false
> >> > >>>>
> >> > >>>> or:
> >> > >>>>
> >> > >>>> hbase.coprocessor.user.enabled=false
> >> > >>>>
> >> > >>>> sync the config across the cluster.
> >> > >>>>
> >> > >>>> restart the cluster
> >> > >>>>
> >> > >>>> then update your table's settings in the hbase shell
> >> > >>>>
> >> > >>>> -Vlad
> >> > >>>>
> >> > >>>>
> >> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <
> anilgupta84@gmail.com>
> >> > >>>> wrote:
> >> > >>>>
> >> > >>>>> Hi All,
> >> > >>>>>
> >> > >>>>> I am using HDP2.1.5; Phoenix 4.0.0 was installed on the RS. I was
> >> > >>>>> running the Phoenix 4.1 client because I could not find a tar file
> >> > >>>>> for "Phoenix-4.0.0-incubating".
> >> > >>>>> I tried to create a view on an existing table, and then my entire
> >> > >>>>> cluster went down (all the RS went down; the Master is still up).
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> This is the exception i am seeing:
> >> > >>>>>
> >> > >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
> >> > regionserver.HRegionServer: ABORTING region server
> >> > bigdatabox.com,60020,1423589420136:
> >> > The coprocessor
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> >> > threw an unexpected exception
> >> > >>>>> java.io.IOException: No jar path specified for
> >> > org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
> >> > >>>>>         at
> >> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
> >> > >>>>>         at
> >> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
> >> > >>>>>         at
> >> > sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
> >> > >>>>>         at
> >> >
> >> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> > >>>>>         at
> >> > java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
> >> > >>>>>         at
> >> >
> >> >
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
> >> > >>>>>         at
> >> >
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> >> > >>>>>         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> > >>>>>         at
> >> >
> >> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> > >>>>>         at java.lang.Thread.run(Thread.java:744)
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> We tried to restart the cluster. It died again. It seems it is
> >> > >>>>> stuck at this point looking for the LocalIndexSplitter class.
> >> > >>>>> How can I resolve this error? We can't do anything in the cluster
> >> > >>>>> until we fix it.
> >> > >>>>>
> >> > >>>>> I was thinking of disabling those tables, but none of the RS are
> >> > coming up. Can anyone suggest how I can bail out of this BAD
> >> > situation?
> >> > >>>>>
> >> > >>>>>
> >> > >>>>> --
> >> > >>>>> Thanks & Regards,
> >> > >>>>> Anil Gupta
> >> > >>>>>
> >> > >>>>
> >> > >>>>
> >> > >>>
> >> > >>>
> >> > >>> --
> >> > >>> Thanks & Regards,
> >> > >>> Anil Gupta
> >> > >>>
> >> > >>
> >> > >>
> >> > >
> >> > >
> >> > > --
> >> > > Thanks & Regards,
> >> > > Anil Gupta
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks & Regards,
> >> > Anil Gupta
> >> >
> >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta
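
Nick's disable/alter/enable sequence, quoted above, looks roughly like the following HBase shell session. The table name MY_TABLE is a placeholder, and the coprocessor slot (coprocessor$1) must first be read from the describe output; treat this as a sketch, not a verified recovery procedure, and note that it requires region servers healthy enough to serve the shell's requests.

```
hbase(main):001:0> disable 'MY_TABLE'
hbase(main):002:0> describe 'MY_TABLE'   # find which coprocessor$N slot holds the Phoenix class
hbase(main):003:0> alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
hbase(main):004:0> enable 'MY_TABLE'
```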

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by James Taylor <ja...@apache.org>.
Mujtaba - do you know where our 4.0.0-incubating artifacts are?

On Thu, Mar 5, 2015 at 9:58 PM, anil gupta <an...@gmail.com> wrote:
> Hi Ted,
>
> This morning, I downloaded 4.1 from the link you provided. The problem
> is that I was unable to find the 4.0.0-incubating release artifacts, so I
> thought to use 4.1 (thinking 4.1 would be a minor, compatible upgrade to
> 4.0) as my client.
> IMO, we should also have the 4.0.0-incubating artifacts, since that is the
> version compatible with HDP2.1.5 (a 6-month-old release of HDP).
>
> Thanks,
> Anil Gupta
>
> On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:
>>
>> Anil:
>> You can find Phoenix release artifacts here:
>> http://archive.apache.org/dist/phoenix/
>>
>> e.g. for 4.1.0:
>> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>>
>> Cheers
>>
>> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com> wrote:
>>
>> > @James: Could you point me to a place where I can find a tar file of
>> > the Phoenix-4.0.0-incubating release? All the links on this page are broken:
>> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>> >
>> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
>> > wrote:
>> >
>> > > I have tried to disable the table, but since none of the RS are
>> > > coming up, I am unable to do it. Am I missing something?
>> > > On the server side, we were using "4.0.0-incubating". It seems like
>> > > my only option is to upgrade the server to 4.1, at least to get the
>> > > HBase cluster UP. I just want my cluster to come up, and then I will
>> > > disable the table that has the Phoenix view.
>> > > What would be the possible side effects of using Phoenix 4.1 with
>> > > HDP2.1.5?
>> > > And if, even after updating to Phoenix 4.1, the problem is not fixed,
>> > > what is the next alternative?
>> > >
>> > >
>> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
>> > > wrote:
>> > >
>> > >> Hi Anil,
>> > >>
>> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped,
>> > >> or
>> > >> trying out a newer version? As James says, the upgrade must be
>> > >> servers
>> > >> first, then client. Also, Phoenix versions tend to be picky about
>> > >> their
>> > >> underlying HBase version.
>> > >>
>> > >> You can also try altering the now-broken phoenix tables via HBase
>> > >> shell,
>> > >> removing the phoenix coprocessor. I've tried this in the past with
>> > >> other
>> > >> coprocessor-loading woes and had mixed results. Try: disable table,
>> > alter
>> > >> table, enable table. There's still sharp edges around
>> > >> coprocessor-based
>> > >> deployment.
>> > >>
>> > >> Keep us posted, and sorry for the mess.
>> > >>
>> > >> -n
>> > >>
>> > >> [0]:
>> > >>
>> >
>> > http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>> > >>
>> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com>
>> > wrote:
>> > >>
>> > >>> Unfortunately, we ran out of luck on this one because we are not
>> > running
>> > >>> the latest version of HBase. This property was introduced recently:
>> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>> > >>> Thanks, Vladimir.
>> > >>>
>> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>> > >>> vladrodionov@gmail.com> wrote:
>> > >>>
>> > >>>> Try the following:
>> > >>>>
>> > >>>> Update hbase-site.xml config, set
>> > >>>>
>> > >>>> hbase.coprocessor.enabled=false
>> > >>>>
>> > >>>> or:
>> > >>>>
>> > >>>> hbase.coprocessor.user.enabled=false
>> > >>>>
>> > >>>> sync the config across the cluster.
>> > >>>>
>> > >>>> restart the cluster
>> > >>>>
>> > >>>> then update your table's settings in the hbase shell
>> > >>>>
>> > >>>> -Vlad
>> > >>>>
>> > >>>>
>> > >>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com>
>> > >>>> wrote:
>> > >>>>
>> > >>>>> Hi All,
>> > >>>>>
>> > >>>>> I am using HDP2.1.5; Phoenix 4.0.0 was installed on the RS. I was
>> > >>>>> running the Phoenix 4.1 client because I could not find a tar file
>> > >>>>> for "Phoenix-4.0.0-incubating".
>> > >>>>> I tried to create a view on an existing table, and then my entire
>> > >>>>> cluster went down (all the RS went down; the Master is still up).
>> > >>>>>
>> > >>>>>
>> > >>>>> This is the exception i am seeing:
>> > >>>>>
>> > >>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
>> > regionserver.HRegionServer: ABORTING region server
>> > bigdatabox.com,60020,1423589420136:
>> > The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>> > threw an unexpected exception
>> > >>>>> java.io.IOException: No jar path specified for
>> > org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>> > >>>>>         at
>> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>> > >>>>>         at
>> > org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>> > >>>>>         at
>> > sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>> > >>>>>         at
>> >
>> > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> > >>>>>         at
>> > java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>> > >>>>>         at
>> >
>> > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>> > >>>>>         at
>> > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>> > >>>>>         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> > >>>>>         at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> > >>>>>         at java.lang.Thread.run(Thread.java:744)
>> > >>>>>
>> > >>>>>
>> > >>>>> We tried to restart the cluster. It died again. It seems it is
>> > >>>>> stuck at this point looking for the LocalIndexSplitter class.
>> > >>>>> How can I resolve this error? We can't do anything in the cluster
>> > >>>>> until we fix it.
>> > >>>>>
>> > >>>>> I was thinking of disabling those tables, but none of the RS are
>> > coming up. Can anyone suggest how I can bail out of this BAD
>> > situation?
>> > >>>>>
>> > >>>>>
>> > >>>>> --
>> > >>>>> Thanks & Regards,
>> > >>>>> Anil Gupta
>> > >>>>>
>> > >>>>
>> > >>>>
>> > >>>
>> > >>>
>> > >>> --
>> > >>> Thanks & Regards,
>> > >>> Anil Gupta
>> > >>>
>> > >>
>> > >>
>> > >
>> > >
>> > > --
>> > > Thanks & Regards,
>> > > Anil Gupta
>> > >
>> >
>> >
>> >
>> > --
>> > Thanks & Regards,
>> > Anil Gupta
>> >
>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
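
Ted's archive links above translate into a fetch along these lines. The exact tarball filename is an assumption; verify it against the directory listing before downloading.

```shell
# Sketch: build the download URL for the Phoenix 4.1.0 binary tarball on the
# Apache archive, following the links in Ted's message. The filename pattern
# phoenix-<version>-bin.tar.gz is assumed; check the listing first.
VERSION=4.1.0
URL="http://archive.apache.org/dist/phoenix/phoenix-${VERSION}/bin/phoenix-${VERSION}-bin.tar.gz"
echo "Would run: curl -fLO ${URL}"
```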

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Hi Ted,

This morning, I downloaded 4.1 from the link you provided. The problem
is that I was unable to find the 4.0.0-incubating release artifacts, so I
thought to use 4.1 (thinking 4.1 would be a minor, compatible upgrade to
4.0) as my client.
IMO, we should also have the 4.0.0-incubating artifacts, since that is the
version compatible with HDP2.1.5 (a 6-month-old release of HDP).

Thanks,
Anil Gupta

On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:

> Anil:
> You can find Phoenix release artifacts here:
> http://archive.apache.org/dist/phoenix/
>
> e.g. for 4.1.0:
> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>
> Cheers
>
> On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com> wrote:
>
> > @James: Could you point me to a place where I can find a tar file of
> > the Phoenix-4.0.0-incubating release? All the links on this page are broken:
> > http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
> >
> > On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com>
> wrote:
> >
> > > I have tried to disable the table, but since none of the RS are
> > > coming up, I am unable to do it. Am I missing something?
> > > On the server side, we were using "4.0.0-incubating". It seems like
> > > my only option is to upgrade the server to 4.1, at least to get the
> > > HBase cluster UP. I just want my cluster to come up, and then I will
> > > disable the table that has the Phoenix view.
> > > What would be the possible side effects of using Phoenix 4.1 with
> > > HDP2.1.5?
> > > And if, even after updating to Phoenix 4.1, the problem is not fixed,
> > > what is the next alternative?
> > >
> > >
> > > On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com>
> wrote:
> > >
> > >> Hi Anil,
> > >>
> > >> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped,
> or
> > >> trying out a newer version? As James says, the upgrade must be servers
> > >> first, then client. Also, Phoenix versions tend to be picky about
> their
> > >> underlying HBase version.
> > >>
> > >> You can also try altering the now-broken phoenix tables via HBase
> shell,
> > >> removing the phoenix coprocessor. I've tried this in the past with
> other
> > >> coprocessor-loading woes and had mixed results. Try: disable table,
> > alter
> > >> table, enable table. There's still sharp edges around
> coprocessor-based
> > >> deployment.
> > >>
> > >> Keep us posted, and sorry for the mess.
> > >>
> > >> -n
> > >>
> > >> [0]:
> > >>
> >
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
> > >>
> > >> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com>
> > wrote:
> > >>
> > >>> Unfortunately, we ran out of luck on this one because we are not
> > running
> > >>> the latest version of HBase. This property was introduced recently:
> > >>> https://issues.apache.org/jira/browse/HBASE-13044 :(
> > >>> Thanks, Vladimir.
> > >>>
> > >>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
> > >>> vladrodionov@gmail.com> wrote:
> > >>>
> > >>>> Try the following:
> > >>>>
> > >>>> Update the hbase-site.xml config, set
> > >>>>
> > >>>> hbase.coprocessor.enabled=false
> > >>>>
> > >>>> or:
> > >>>>
> > >>>> hbase.coprocessor.user.enabled=false
> > >>>>
> > >>>> sync the config across the cluster,
> > >>>>
> > >>>> restart the cluster,
> > >>>>
> > >>>> then update your table's settings in the hbase shell.
> > >>>>
> > >>>> -Vlad
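As an hbase-site.xml fragment, that suggestion would look like the following. Note that, as pointed out elsewhere in the thread, this property only exists in HBase versions that include HBASE-13044:

```
<!-- hbase-site.xml: skip loading of coprocessors cluster-wide.
     Only honored by HBase releases containing HBASE-13044. -->
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>false</value>
</property>
```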
> > >>>>
> > >>>
> > >>>
> > >>> --
> > >>> Thanks & Regards,
> > >>> Anil Gupta
> > >>>
> > >>
> > >>
> > >
> > >
> > > --
> > > Thanks & Regards,
> > > Anil Gupta
> > >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>



-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Hi Ted,

This morning I downloaded 4.1 from the link you provided. The problem
is that I was unable to find the 4.0.0-incubating release artifacts, so
I used 4.1 as my client (thinking 4.1 would be a minor, compatible
upgrade of 4.0).
IMO, we should also keep the 4.0.0-incubating artifacts available,
since that is the version compatible with HDP2.1.5 (a six-month-old HDP
release).

Thanks,
Anil Gupta

On Thu, Mar 5, 2015 at 9:17 PM, Ted Yu <yu...@gmail.com> wrote:

> Anil:
> You can find Phoenix release artifacts here:
> http://archive.apache.org/dist/phoenix/
>
> e.g. for 4.1.0:
> http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/
>
> Cheers
>



-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Ted Yu <yu...@gmail.com>.
Anil:
You can find Phoenix release artifacts here:
http://archive.apache.org/dist/phoenix/

e.g. for 4.1.0:
http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/

Cheers

On Thu, Mar 5, 2015 at 5:26 PM, anil gupta <an...@gmail.com> wrote:

> @James: Could you point me to a place where i can find tar file of
> Phoenix-4.0.0-incubating release? All the links on this page are broken:
> http://www.apache.org/dyn/closer.cgi/incubator/phoenix/

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
We copied the Phoenix 4.1 server jar onto our RS and restarted the
cluster. It came up. Sigh! Copying the jar fixed it! This was one of the
nastiest situations I've ever run into with HBase.
After the restart, I truncated 'SYSTEM.CATALOG' and 'SYSTEM.SEQUENCE'. I
also removed all the Phoenix coprocessors from my table.

Thanks everyone for the prompt support. :)
I am going to try connecting a 4.0 client to a 4.0 server tomorrow and
see how it goes. Hopefully I won't have any disaster.
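The recovery steps above, sketched as commands. The host name, jar name, and paths follow the usual HDP 2.1 layout and are assumptions; adjust for your install:

```
# Copy the Phoenix server jar into the HBase lib dir on each region server:
scp phoenix-4.1.0-server-hadoop2.jar rs-host:/usr/lib/hbase/lib/

# After restarting the region servers, reset Phoenix metadata from the
# HBase shell (this discards all Phoenix table/sequence metadata):
truncate 'SYSTEM.CATALOG'
truncate 'SYSTEM.SEQUENCE'
```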

~Anil Gupta

On Thu, Mar 5, 2015 at 6:38 PM, Jeffrey Zhong <je...@apache.org> wrote:

>
> Theoretically you can run Phoenix4.1 server jar on HDP2.1.5 but this
> combination isn't tested. So you should not try it in a production env.
>
>
> In order to work around the coprocessor issue, you can try to rename the
> table folder in HDFS, then restart the region servers so the meta region
> will be assigned. You can then disable the problematic table, rename the
> HDFS table folder back, and alter the table to remove the coprocessor.
>
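The rename workaround might look like this. The /apps/hbase/data root is the usual HDP default and MY_TABLE is a placeholder; both are assumptions:

```
# Park the table directory so region open (and coprocessor load) is skipped:
sudo -u hbase hdfs dfs -mv /apps/hbase/data/data/default/MY_TABLE \
                          /apps/hbase/data/data/default/MY_TABLE.bak
# Restart the region servers, disable 'MY_TABLE' in the HBase shell, then:
sudo -u hbase hdfs dfs -mv /apps/hbase/data/data/default/MY_TABLE.bak \
                          /apps/hbase/data/data/default/MY_TABLE
```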
>
> Another thing: once you have used a Phoenix 4.1 client to connect to a
> Phoenix cluster, the cluster's system table schema is upgraded to the 4.1
> version. In 4.1 it seems there are only two new columns added to the
> catalog table, so it should be all right to continue using the 4.0
> Phoenix client.
>
> ------------------------------
> Date: Fri, 6 Mar 2015 09:37:32 +0800
> From: sunfl@certusnet.com.cn
> To: user@phoenix.apache.org; jamestaylor@apache.org
> CC: dev@phoenix.apache.org
> Subject: Re: Re: HBase Cluster Down: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>
>
> Hi Anil,
>
> I remember that you can find the client jars under
> $PHOENIX_HOME/phoenix-assembly/target, named something like
> phoenix-4.0.0-incubating-client.jar.
> If you rebuild the Phoenix source against your HBase version, that is
> where the resulting jar will be.
>
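Building from source would look roughly like this, assuming the 4.0.0-incubating source tarball; the exact archive name and Maven flags may vary:

```
tar xzf phoenix-4.0.0-incubating-src.tar.gz
cd phoenix-4.0.0-incubating-src
mvn clean package -DskipTests
# client/server jars should land under phoenix-assembly/target/
```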
> Or do you just want the original tar file for the incubating Phoenix
> release? You can find it here:
> https://archive.apache.org/dist/incubator/phoenix/phoenix-4.0.0-incubating/src/
>
>
> Thanks,
> Sun.
>
> ------------------------------
> ------------------------------
>
> CertusNet
>
>
> *From:* anil gupta <an...@gmail.com>
> *Date:* 2015-03-06 09:26
> *To:* user@phoenix.apache.org; James Taylor <ja...@apache.org>
> *CC:* dev <de...@phoenix.apache.org>
> *Subject:* Re: HBase Cluster Down: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> @James: Could you point me to a place where i can find tar file of
> Phoenix-4.0.0-incubating release? All the links on this page are broken:
> http://www.apache.org/dyn/closer.cgi/incubator/phoenix/


-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
We copied Phoenix4.1 on our RS and restarted the cluster. It came UP.
Sigh! Copying jar fixed it!.. This was one of the most nasty situation i've
ever run into HBase.
After the restart,  i truncated 'SYSTEM.CATALOG' and "SYSTEM.SEQUENCE". I
also removed all the Phoenix Coprocessors from my table.

Thanks everyone for the prompt support. :)
I am going to try connecting 4.0 client with 4.0 server tomorrow and see
how it goes. Hopefully, i wont have any disaster.

~Anil Gupta

On Thu, Mar 5, 2015 at 6:38 PM, Jeffrey Zhong <je...@apache.org> wrote:

>
> Theoretically you can run Phoenix4.1 server jar on HDP2.1.5 but this
> combination isn't tested. So you should not try it in a production env.
>
>
> In order to workaround the coprocessor issue, you can try to rename the
> table folder in hdfs, then restart region servers so meta region will be
> assigned. You can then disable the problematic table, rename the hdfs table
> folder back and then alter table to remove the coprocessor.
>
>
> Another thing is that once you used phoenix4.1 client connect to a phoenix
> cluster, the cluster system table schema are upgraded to 4.1 version.  In
> 4.1 version, it seems to me there are only two new columns created in
> catalog table and should be all right to continue use 4.0 Phoenix client.
>
> ------------------------------
> Date: Fri, 6 Mar 2015 09:37:32 +0800
> From: sunfl@certusnet.com.cn
> To: user@phoenix.apache.org; jamestaylor@apache.org
> CC: dev@phoenix.apache.org
> Subject: Re: Re: HBase Cluster Down: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>
>
> Hi. anil
>
> I remember that you can find the client jars from
> $PHOENIX_HOME/phoenix-assembly/target with some kind of
> phoenix-4.0.0-incubating-client.jar sort of.
> If you rebuild phoenix source targeting on your hbase version, that would
> be the right place.
>
> Or you just want the original tar file for the incubating phoenix? You can
> find here:
> https://archive.apache.org/dist/incubator/phoenix/phoenix-4.0.0-incubating/src/
>
>
> Thanks,
> Sun.
>
> ------------------------------
> ------------------------------
>
> CertusNet
>
>
> *From:* anil gupta <an...@gmail.com>
> *Date:* 2015-03-06 09:26
> *To:* user@phoenix.apache.org; James Taylor <ja...@apache.org>
> *CC:* dev <de...@phoenix.apache.org>
> *Subject:* Re: HBase Cluster Down: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
> @James: Could you point me to a place where i can find tar file of
> Phoenix-4.0.0-incubating release? All the links on this page are broken:
> http://www.apache.org/dyn/closer.cgi/incubator/phoenix/
>
> On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com> wrote:
>
> I have tried to disable the table but since none of the RS are coming up.
> I am unable to do it. Am i missing something?
> On the server side, we were using the "4.0.0-incubating". It seems like my
> only option is to upgrade the server to 4.1.  At-least, the HBase cluster
> to be UP. I just want my cluster to come and then i will disable the table
> that has a Phoenix view.
> What would be the possible side effects of using Phoenix 4.1 with
> HDP2.1.5.
> Even after updating to Phoenix4.1, if the problem is not fixed. What is
> the next alternative?
>
>
> On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com> wrote:
>
> Hi Anil,
>
> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
> trying out a newer version? As James says, the upgrade must be servers
> first, then client. Also, Phoenix versions tend to be picky about their
> underlying HBase version.
>
> You can also try altering the now-broken phoenix tables via HBase shell,
> removing the phoenix coprocessor. I've tried this in the past with other
> coprocessor-loading woes and had mixed results. Try: disable table, alter
> table, enable table. There's still sharp edges around coprocessor-based
> deployment.
>
> Keep us posted, and sorry for the mess.
>
> -n
>
> [0]:
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>
> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com> wrote:
>
> Unfortunately, we ran out of luck on this one because we are not running
> the latest version of HBase. This property was introduced recently:
> https://issues.apache.org/jira/browse/HBASE-13044 :(
> Thanks, Vladimir.
>
> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vl...@gmail.com>
> wrote:
>
> Try the following:
>
> Update hbase-site.xml config, set
>
> hbase.coprocessor.enabed=false
>
> or:
>
> hbase.coprocessor.user.enabed=false
>
> sync config across cluster.
>
> restart the cluster
>
> than update your table's settings in hbase shell
>
> -Vlad
>
>
> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
>
> Hi All,
>
> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
> Phoenix4.1 client because i could not find tar file for
> "Phoenix4-0.0-incubating".
> I tried to create a view on existing table and then my entire cluster went
> down(all the RS went down. MAster is still up).
>
>
> This is the exception i am seeing:
>
> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> We tried to restart the cluster. It died again. It seems it is stuck at this point, looking for the
>
> LocalIndexSplitter class. How can I resolve this error? We can't do anything in the cluster until we fix it.
>
> I was thinking of disabling those tables, but none of the RS are coming up. Can anyone suggest how I can bail out of this BAD situation?
>
>
> --
> Thanks & Regards,
> Anil Gupta


-- 
Thanks & Regards,
Anil Gupta

RE: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Jeffrey Zhong <je...@apache.org>.

Theoretically you can run the Phoenix 4.1 server jar on HDP 2.1.5, but this combination isn't tested, so you should not try it in a production environment.


In order to work around the coprocessor issue, you can try renaming the table folder in HDFS and then restarting the region servers so the meta region will be assigned. You can then disable the problematic table, rename the HDFS table folder back, and alter the table to remove the coprocessor.
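
A sketch of that sequence, assuming the usual HDP layout under /apps/hbase/data and a table named MY_TABLE (both are placeholders; check hbase.rootdir for your cluster, and `describe 'MY_TABLE'` for the actual coprocessor$N slot):

```shell
# 1. Hide the table folder so the region servers can start.
hdfs dfs -mv /apps/hbase/data/data/default/MY_TABLE \
             /apps/hbase/data/data/default/MY_TABLE.bak

# 2. Restart the region servers, then disable the table from the hbase shell:
#      disable 'MY_TABLE'

# 3. Move the folder back and remove the coprocessor attribute:
hdfs dfs -mv /apps/hbase/data/data/default/MY_TABLE.bak \
             /apps/hbase/data/data/default/MY_TABLE
#      alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
#      enable 'MY_TABLE'
```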


Another thing: once you have used a Phoenix 4.1 client to connect to a Phoenix cluster, the cluster's system table schema is upgraded to the 4.1 version. In 4.1, it seems to me there are only two new columns created in the catalog table, so it should be all right to continue using the 4.0 Phoenix client.
Date: Fri, 6 Mar 2015 09:37:32 +0800
From: sunfl@certusnet.com.cn
To: user@phoenix.apache.org; jamestaylor@apache.org
CC: dev@phoenix.apache.org
Subject: Re: Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter


Hi anil,
I remember that you can find the client jars under $PHOENIX_HOME/phoenix-assembly/target, named something like phoenix-4.0.0-incubating-client.jar. If you rebuild the Phoenix source targeting your HBase version, that would be the right place.
Or do you just want the original tar file for the incubating Phoenix? You can find it here: https://archive.apache.org/dist/incubator/phoenix/phoenix-4.0.0-incubating/src/
Thanks,
Sun.


CertusNet 
From: anil gupta
Date: 2015-03-06 09:26
To: user@phoenix.apache.org; James Taylor
CC: dev
Subject: Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

@James: Could you point me to a place where I can find the tar file of the Phoenix-4.0.0-incubating release? All the links on this page are broken: http://www.apache.org/dyn/closer.cgi/incubator/phoenix/

On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com> wrote:
I have tried to disable the table, but since none of the RS are coming up, I am unable to do it. Am I missing something?
On the server side, we were using "4.0.0-incubating". It seems like my only option is to upgrade the server to 4.1, at least to get the HBase cluster up. I just want my cluster to come up, and then I will disable the table that has a Phoenix view.
What would be the possible side effects of using Phoenix 4.1 with HDP 2.1.5?
If the problem is not fixed even after updating to Phoenix 4.1, what is the next alternative?


On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com> wrote:
Hi Anil,
HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or trying out a newer version? As James says, the upgrade must be servers first, then client. Also, Phoenix versions tend to be picky about their underlying HBase version.
You can also try altering the now-broken Phoenix tables via the HBase shell, removing the Phoenix coprocessor. I've tried this in the past with other coprocessor-loading woes and had mixed results. Try: disable table, alter table, enable table. There are still sharp edges around coprocessor-based deployment.
Keep us posted, and sorry for the mess.
-n
[0]: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
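
In hbase shell syntax, that disable/alter/enable sequence is roughly the following. The table name and the coprocessor$1 slot are placeholders; run `describe 'MY_TABLE'` first to see the actual coprocessor$N attribute on your table.

```
disable 'MY_TABLE'
alter 'MY_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
enable 'MY_TABLE'
```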

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Nick Dimiduk <nd...@gmail.com>.
As a terrible hack, you may be able to create a jar containing a no-op
coprocessor called org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
(make sure it extends BaseRegionObserver), drop it into the RS lib
directory, and restart the process. That should satisfy this "table descriptor
death pill", allowing the server to come up far enough that you can remove the
coprocessor using the alter-table procedure I described earlier.
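
A minimal sketch of such a stub, assuming the 0.98-era HBase coprocessor API. It compiles only with hbase-server on the classpath, and the package and class name must match the table descriptor exactly.

```java
// Hypothetical no-op stand-in for the coprocessor class named in the
// table descriptor. BaseRegionObserver provides empty implementations of
// every RegionObserver hook, so regions can open normally.
package org.apache.hadoop.hbase.regionserver;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

public class LocalIndexSplitter extends BaseRegionObserver {
    // Intentionally empty: the goal is only to let regions open so the
    // coprocessor attribute can be removed via alter in the hbase shell.
}
```

Remember to delete the stub jar after the coprocessor attribute has been removed, so it doesn't shadow the real Phoenix class later.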

-n



Re: Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Fulin Sun <su...@certusnet.com.cn>.
Hi anil,

I remember that you can find the client jars under $PHOENIX_HOME/phoenix-assembly/target, named something like phoenix-4.0.0-incubating-client.jar.
If you rebuild the Phoenix source targeting your HBase version, that would be the right place.

Or do you just want the original tar file for the incubating Phoenix? You can find it here: https://archive.apache.org/dist/incubator/phoenix/phoenix-4.0.0-incubating/src/

Thanks,
Sun.





CertusNet 


Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Ted Yu <yu...@gmail.com>.
Anil:
You can find Phoenix release artifacts here:
http://archive.apache.org/dist/phoenix/

e.g. for 4.1.0:
http://archive.apache.org/dist/phoenix/phoenix-4.1.0/bin/

Cheers

> >>>>>         at java.lang.Thread.run(Thread.java:744)
> >>>>>
> >>>>>
> >>>>> We tried to restart the cluster. It died again. It seems, its stucks
> at this point looking for
> >>>>>
> >>>>> LocalIndexSplitter class. How can i resolve this error? We cant do
> anything in the cluster until we fix it.
> >>>>>
> >>>>> I was thinking of disabling those tables but none of the RS is
> coming up. Can anyone suggest me how can i bail out of this BAD situation.
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Thanks & Regards,
> >>>>> Anil Gupta
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>> --
> >>> Thanks & Regards,
> >>> Anil Gupta
> >>>
> >>
> >>
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
@James: Could you point me to a place where I can find the tar file of the
Phoenix-4.0.0-incubating release? All the links on this page are broken:
http://www.apache.org/dyn/closer.cgi/incubator/phoenix/

On Thu, Mar 5, 2015 at 5:04 PM, anil gupta <an...@gmail.com> wrote:

> I have tried to disable the table but since none of the RS are coming up.
> I am unable to do it. Am i missing something?
> On the server side, we were using the "4.0.0-incubating". It seems like my
> only option is to upgrade the server to 4.1.  At-least, the HBase cluster
> to be UP. I just want my cluster to come and then i will disable the table
> that has a Phoenix view.
> What would be the possible side effects of using Phoenix 4.1 with
> HDP2.1.5.
> Even after updating to Phoenix4.1, if the problem is not fixed. What is
> the next alternative?
>
>
> On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com> wrote:
>
>> Hi Anil,
>>
>> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
>> trying out a newer version? As James says, the upgrade must be servers
>> first, then client. Also, Phoenix versions tend to be picky about their
>> underlying HBase version.
>>
>> You can also try altering the now-broken phoenix tables via HBase shell,
>> removing the phoenix coprocessor. I've tried this in the past with other
>> coprocessor-loading woes and had mixed results. Try: disable table, alter
>> table, enable table. There's still sharp edges around coprocessor-based
>> deployment.
>>
>> Keep us posted, and sorry for the mess.
>>
>> -n
>>
>> [0]:
>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>>
>> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com> wrote:
>>
>>> Unfortunately, we ran out of luck on this one because we are not running
>>> the latest version of HBase. This property was introduced recently:
>>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>>> Thanks, Vladimir.
>>>
>>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <
>>> vladrodionov@gmail.com> wrote:
>>>
>>>> Try the following:
>>>>
>>>> Update hbase-site.xml config, set
>>>>
>>>> hbase.coprocessor.enabled=false
>>>>
>>>> or:
>>>>
>>>> hbase.coprocessor.user.enabled=false
>>>>
>>>> sync config across cluster.
>>>>
>>>> restart the cluster
>>>>
>>>> then update your table's settings in hbase shell
>>>>
>>>> -Vlad
>>>>
>>>>
>>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
>>>>> Phoenix4.1 client because i could not find tar file for
>>>>> "Phoenix4-0.0-incubating".
>>>>> I tried to create a view on existing table and then my entire cluster
>>>>> went down(all the RS went down. MAster is still up).
>>>>>
>>>>>
>>>>> This is the exception i am seeing:
>>>>>
>>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>>>>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>>>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>>>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>         at java.lang.Thread.run(Thread.java:744)
>>>>>
>>>>>
>>>>> We tried to restart the cluster. It died again. It seems, its stucks at this point looking for
>>>>>
>>>>> LocalIndexSplitter class. How can i resolve this error? We cant do anything in the cluster until we fix it.
>>>>>
>>>>> I was thinking of disabling those tables but none of the RS is coming up. Can anyone suggest me how can i bail out of this BAD situation.
>>>>>
>>>>>
>>>>> --
>>>>> Thanks & Regards,
>>>>> Anil Gupta
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks & Regards,
>>> Anil Gupta
>>>
>>
>>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
I have tried to disable the table, but since none of the RS are coming up,
I am unable to do it. Am I missing something?
On the server side, we were using "4.0.0-incubating". It seems like my only
option is to upgrade the server to 4.1, at least to get the HBase cluster
back up. I just want my cluster to come up, and then I will disable the
table that has the Phoenix view.
What would be the possible side effects of using Phoenix 4.1 with HDP2.1.5?
If the problem is still not fixed even after upgrading to Phoenix 4.1, what
is the next alternative?
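For anyone reading this later: the "No jar path specified" abort means the table descriptor names the LocalIndexSplitter class with no jar path, so the RS tries to load it from its default classpath; the usual way out is to put the matching Phoenix server jar on every RegionServer's classpath before restarting. A rough sketch, where the jar name, host, and HBase lib directory are examples and not the exact paths from this thread:

```shell
# Sketch only: phoenix-4.1.0-server.jar, rs-host, and /usr/lib/hbase/lib
# are assumptions; substitute your actual jar, hosts, and HBase lib dir.
# Copy the Phoenix server jar onto each RegionServer so the coprocessor
# class resolves from the default classloader:
scp phoenix-4.1.0-server.jar rs-host:/usr/lib/hbase/lib/

# Then restart the RegionServer (restart mechanism varies by install):
ssh rs-host 'sudo service hbase-regionserver restart'
```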


On Thu, Mar 5, 2015 at 4:54 PM, Nick Dimiduk <nd...@gmail.com> wrote:

> Hi Anil,
>
> HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
> trying out a newer version? As James says, the upgrade must be servers
> first, then client. Also, Phoenix versions tend to be picky about their
> underlying HBase version.
>
> You can also try altering the now-broken phoenix tables via HBase shell,
> removing the phoenix coprocessor. I've tried this in the past with other
> coprocessor-loading woes and had mixed results. Try: disable table, alter
> table, enable table. There's still sharp edges around coprocessor-based
> deployment.
>
> Keep us posted, and sorry for the mess.
>
> -n
>
> [0]:
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
>
> On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com> wrote:
>
>> Unfortunately, we ran out of luck on this one because we are not running
>> the latest version of HBase. This property was introduced recently:
>> https://issues.apache.org/jira/browse/HBASE-13044 :(
>> Thanks, Vladimir.
>>
>> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vladrodionov@gmail.com
>> > wrote:
>>
>>> Try the following:
>>>
>>> Update hbase-site.xml config, set
>>>
>>> hbase.coprocessor.enabled=false
>>>
>>> or:
>>>
>>> hbase.coprocessor.user.enabled=false
>>>
>>> sync config across cluster.
>>>
>>> restart the cluster
>>>
>>> then update your table's settings in hbase shell
>>>
>>> -Vlad
>>>
>>>
>>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
>>>> Phoenix4.1 client because i could not find tar file for
>>>> "Phoenix4-0.0-incubating".
>>>> I tried to create a view on existing table and then my entire cluster
>>>> went down(all the RS went down. MAster is still up).
>>>>
>>>>
>>>> This is the exception i am seeing:
>>>>
>>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>>>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:744)
>>>>
>>>>
>>>> We tried to restart the cluster. It died again. It seems, its stucks at this point looking for
>>>>
>>>> LocalIndexSplitter class. How can i resolve this error? We cant do anything in the cluster until we fix it.
>>>>
>>>> I was thinking of disabling those tables but none of the RS is coming up. Can anyone suggest me how can i bail out of this BAD situation.
>>>>
>>>>
>>>> --
>>>> Thanks & Regards,
>>>> Anil Gupta
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Nick Dimiduk <nd...@gmail.com>.
Hi Anil,

HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
trying out a newer version? As James says, the upgrade must be servers
first, then client. Also, Phoenix versions tend to be picky about their
underlying HBase version.

You can also try altering the now-broken Phoenix tables via the HBase
shell, removing the Phoenix coprocessor. I've tried this in the past with
other coprocessor-loading woes and had mixed results. Try: disable table,
alter table, enable table. There are still sharp edges around
coprocessor-based deployment.

Keep us posted, and sorry for the mess.

-n

[0]:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html
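The disable/alter/enable sequence described above looks roughly like this in the HBase shell; the table name is hypothetical, and the `coprocessor$1` attribute key is an example — check the `describe` output for the real key first:

```shell
# Inspect which coprocessor attributes the table actually carries
# ('MY_PHOENIX_TABLE' is a placeholder):
echo "describe 'MY_PHOENIX_TABLE'" | hbase shell

# Remove the offending coprocessor attribute (key shown is an example;
# use the coprocessor$N key reported by describe):
hbase shell <<'EOF'
disable 'MY_PHOENIX_TABLE'
alter 'MY_PHOENIX_TABLE', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
enable 'MY_PHOENIX_TABLE'
EOF
```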

On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com> wrote:

> Unfortunately, we ran out of luck on this one because we are not running
> the latest version of HBase. This property was introduced recently:
> https://issues.apache.org/jira/browse/HBASE-13044 :(
> Thanks, Vladimir.
>
> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vl...@gmail.com>
> wrote:
>
>> Try the following:
>>
>> Update hbase-site.xml config, set
>>
>> hbase.coprocessor.enabled=false
>>
>> or:
>>
>> hbase.coprocessor.user.enabled=false
>>
>> sync config across cluster.
>>
>> restart the cluster
>>
>> then update your table's settings in hbase shell
>>
>> -Vlad
>>
>>
>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
>>> Phoenix4.1 client because i could not find tar file for
>>> "Phoenix4-0.0-incubating".
>>> I tried to create a view on existing table and then my entire cluster
>>> went down(all the RS went down. MAster is still up).
>>>
>>>
>>> This is the exception i am seeing:
>>>
>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:744)
>>>
>>>
>>> We tried to restart the cluster. It died again. It seems, its stucks at this point looking for
>>>
>>> LocalIndexSplitter class. How can i resolve this error? We cant do anything in the cluster until we fix it.
>>>
>>> I was thinking of disabling those tables but none of the RS is coming up. Can anyone suggest me how can i bail out of this BAD situation.
>>>
>>>
>>> --
>>> Thanks & Regards,
>>> Anil Gupta
>>>
>>
>>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Nick Dimiduk <nd...@gmail.com>.
Hi Anil,

HDP-2.1.5 ships with Phoenix [0]. Are you using the version shipped, or
trying out a newer version? As James says, the upgrade must be servers
first, then client. Also, Phoenix versions tend to be picky about their
underlying HBase version.

You can also try altering the now-broken phoenix tables via HBase shell,
removing the phoenix coprocessor. I've tried this in the past with other
coprocessor-loading woes and had mixed results. Try: disable table, alter
table, enable table. There's still sharp edges around coprocessor-based
deployment.

Keep us posted, and sorry for the mess.

-n

[0]:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_releasenotes_hdp_2.1/content/ch_relnotes-hdp-2.1.5-product.html

On Thu, Mar 5, 2015 at 4:34 PM, anil gupta <an...@gmail.com> wrote:

> Unfortunately, we ran out of luck on this one because we are not running
> the latest version of HBase. This property was introduced recently:
> https://issues.apache.org/jira/browse/HBASE-13044 :(
> Thanks, Vladimir.
>
> On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vl...@gmail.com>
> wrote:
>
>> Try the following:
>>
>> Update hbase-site.xml config, set
>>
>> hbase.coprocessor.enabed=false
>>
>> or:
>>
>> hbase.coprocessor.user.enabed=false
>>
>> sync config across cluster.
>>
>> restart the cluster
>>
>> than update your table's settings in hbase shell
>>
>> -Vlad
>>
>>
>> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am using HDP2.1.5, Phoenix4-0.0 was installed on RS. I was running
>>> Phoenix4.1 client because i could not find tar file for
>>> "Phoenix4-0.0-incubating".
>>> I tried to create a view on existing table and then my entire cluster
>>> went down(all the RS went down. MAster is still up).
>>>
>>>
>>> This is the exception i am seeing:
>>>
>>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:744)
>>>
>>>
>>> We tried to restart the cluster. It died again. It seems it's stuck at this point looking for the
>>>
>>> LocalIndexSplitter class. How can I resolve this error? We can't do anything in the cluster until we fix it.
>>>
>>> I was thinking of disabling those tables, but none of the RS are coming up. Can anyone suggest how I can bail out of this bad situation?
>>>
>>>
>>> --
>>> Thanks & Regards,
>>> Anil Gupta
>>>
>>
>>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by anil gupta <an...@gmail.com>.
Unfortunately, we ran out of luck on this one because we are not running
the latest version of HBase. This property was introduced recently:
https://issues.apache.org/jira/browse/HBASE-13044 :(
Thanks, Vladimir.

On Thu, Mar 5, 2015 at 3:44 PM, Vladimir Rodionov <vl...@gmail.com>
wrote:

> Try the following:
>
> Update hbase-site.xml config, set
>
> hbase.coprocessor.enabled=false
>
> or:
>
> hbase.coprocessor.user.enabled=false
>
> sync the config across the cluster,
>
> restart the cluster,
>
> then update your table's settings in the hbase shell
>
> -Vlad
>
>
> On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
>
>> Hi All,
>>
>> I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RS. I was running
>> the Phoenix 4.1 client because I could not find a tar file for
>> "Phoenix 4.0.0-incubating".
>> I tried to create a view on an existing table, and then my entire cluster
>> went down (all the RS went down; the Master is still up).
>>
>>
>> This is the exception i am seeing:
>>
>> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
>> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>>
>>
>> We tried to restart the cluster. It died again. It seems it's stuck at this point looking for the
>>
>> LocalIndexSplitter class. How can I resolve this error? We can't do anything in the cluster until we fix it.
>>
>> I was thinking of disabling those tables, but none of the RS are coming up. Can anyone suggest how I can bail out of this bad situation?
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


-- 
Thanks & Regards,
Anil Gupta

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by Vladimir Rodionov <vl...@gmail.com>.
Try the following:

Update hbase-site.xml config, set

hbase.coprocessor.enabled=false

or:

hbase.coprocessor.user.enabled=false

sync the config across the cluster,

restart the cluster,

then update your table's settings in the hbase shell
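
For reference, a sketch of what that hbase-site.xml entry would look like.
Note anil's follow-up elsewhere in this thread: these properties were only
introduced by HBASE-13044, so they do not exist in older HBase releases:

```
<!-- Disables loading of coprocessors cluster-wide.             -->
<!-- Requires an HBase build that includes HBASE-13044.         -->
<property>
  <name>hbase.coprocessor.enabled</name>
  <value>false</value>
</property>
```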

-Vlad


On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:

> Hi All,
>
> I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RS. I was running
> the Phoenix 4.1 client because I could not find a tar file for
> "Phoenix 4.0.0-incubating".
> I tried to create a view on an existing table, and then my entire cluster
> went down (all the RS went down; the Master is still up).
>
>
> This is the exception i am seeing:
>
> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2] regionserver.HRegionServer: ABORTING region server bigdatabox.com,60020,1423589420136: The coprocessor org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected exception
> java.io.IOException: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>         at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> We tried to restart the cluster. It died again. It seems it's stuck at this point looking for the
>
> LocalIndexSplitter class. How can I resolve this error? We can't do anything in the cluster until we fix it.
>
> I was thinking of disabling those tables, but none of the RS are coming up. Can anyone suggest how I can bail out of this bad situation?
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: HBase Cluster Down: No jar path specified for org.apache.hadoop.hbase.regionserver.LocalIndexSplitter

Posted by James Taylor <ja...@apache.org>.
Hi Anil,
You should be able to put the Phoenix 4.1 server jar on the RS and
Master nodes in the HBase lib dir and then bounce your cluster so that
HBase will find the LocalIndexSplitter class (if you're upgrading,
though, I'd recommend just switching to the latest - 4.3 instead).
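
A sketch of that deployment. The jar name, lib path, and host list below are
all hypothetical and depend on your HDP layout -- check where HBase's lib dir
actually lives on your nodes:

```
# Hypothetical jar name, lib path, and hosts -- adjust for your cluster.
for host in master1 rs1 rs2 rs3; do
  scp phoenix-4.1.0-server-hadoop2.jar "$host:/usr/lib/hbase/lib/"
done
# Then bounce the Master and region servers (e.g. via Ambari) so the
# LocalIndexSplitter class lands on the region server classpath.
```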

Phoenix requires that you upgrade the server jar first to a newer
version followed eventually by the client jars (FWIW, a mix of client
jar versions is only supported as of 4.3+). You should never install a
newer minor version of Phoenix client against an older Phoenix server
jar - more here: http://phoenix.apache.org/upgrading.html

I filed PHOENIX-1703 so that this will be prevented from happening.

Thanks,
James


On Thu, Mar 5, 2015 at 3:32 PM, anil gupta <an...@gmail.com> wrote:
> Hi All,
>
> I am using HDP 2.1.5; Phoenix 4.0.0 was installed on the RS. I was running
> the Phoenix 4.1 client because I could not find a tar file for
> "Phoenix 4.0.0-incubating".
> I tried to create a view on an existing table, and then my entire cluster
> went down (all the RS went down; the Master is still up).
>
>
> This is the exception i am seeing:
>
> 2015-03-05 14:30:53,296 FATAL [RS_OPEN_REGION-hdpslave8:60020-2]
> regionserver.HRegionServer: ABORTING region server
> bigdatabox.com,60020,1423589420136: The coprocessor
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter threw an unexpected
> exception
> java.io.IOException: No jar path specified for
> org.apache.hadoop.hbase.regionserver.LocalIndexSplitter
>         at
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:177)
>         at
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
>         at
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:555)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:462)
>         at sun.reflect.GeneratedConstructorAccessor33.newInstance(Unknown
> Source)
>         at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4119)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4430)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4403)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4359)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4310)
>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>         at
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>         at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> We tried to restart the cluster. It died again. It seems it's stuck at this
> point looking for the
>
> LocalIndexSplitter class. How can I resolve this error? We can't do anything
> in the cluster until we fix it.
>
> I was thinking of disabling those tables, but none of the RS are coming up.
> Can anyone suggest how I can bail out of this bad situation?
>
>
> --
> Thanks & Regards,
> Anil Gupta