Posted to dev@phoenix.apache.org by Siddhi Mehta <sm...@gmail.com> on 2015/08/20 04:39:50 UTC

PhoenixHbaseStorage on secure cluster

Hey Guys


I am trying to use PhoenixHBaseStorage to write to an HBase table.


We start this Pig job from within a map task (similar to Oozie).


I run TableMapReduceUtil.initCredentials(job) on the client to obtain the
correct auth tokens for my map task.
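
For context, the client-side setup looks roughly like the following sketch
(the class and job names are illustrative; TableMapReduceUtil.initCredentials
is the real HBase API, and it needs a live secure cluster to actually obtain
a token, so this is a shape reference rather than a standalone program):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class LauncherSketch {
    public static void main(String[] args) throws Exception {
        // Load hbase-site.xml from the classpath so the job picks up
        // the secure cluster's settings (Kerberos principals, quorum, etc.)
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "pig-launcher");

        // Obtain an HBase delegation token for the submitting user and
        // attach it to the job's credentials, so that map tasks can
        // authenticate to HBase without holding a Kerberos ticket.
        TableMapReduceUtil.initCredentials(job);

        // ... configure and submit the job that launches the Pig script ...
    }
}
```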


I have ensured that hbase-site.xml, as well as the hbase-client and
hbase-server jars, are on the classpath for the Pig job.
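
For reference, a minimal Pig script using PhoenixHBaseStorage might look like
this (the jar path, input file, table name, and ZooKeeper quorum are all
hypothetical placeholders; the two-argument StoreFunc constructor takes the
ZooKeeper quorum and an optional -batchSize flag):

```pig
-- Register the Phoenix client jar (path is hypothetical)
REGISTER /opt/phoenix/phoenix-4.5.0-client.jar;

A = LOAD 'input.csv' USING PigStorage(',') AS (id:int, name:chararray);

-- Store into a Phoenix-managed HBase table via the ZooKeeper quorum
STORE A INTO 'hbase://MY_TABLE' USING
    org.apache.phoenix.pig.PhoenixHBaseStorage('hmaster1:2181', '-batchSize 1000');
```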


Any ideas on what I could be missing?


I am using Phoenix 4.5 and HBase 0.98.13.


I see the following exceptions in the logs of the Pig job that tries
writing to HBase:



Aug 20, 2015 12:04:31 AM
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper <init>
INFO: Process identifier=hconnection-0x3c1e23ff connecting to ZooKeeper
ensemble=hmaster1:2181,hmaster2:2181,hmaster3:2181
Aug 20, 2015 12:04:31 AM
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
makeStub
INFO: getMaster attempt 1 of 35 failed; retrying after sleep of 100,
exception=com.google.protobuf.ServiceException:
java.lang.NullPointerException
Aug 20, 2015 12:04:31 AM
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
makeStub
INFO: getMaster attempt 2 of 35 failed; retrying after sleep of 200,
exception=com.google.protobuf.ServiceException: java.io.IOException: Call
to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:60000 failed on local
exception: java.io.EOFException
Aug 20, 2015 12:04:31 AM
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
makeStub
INFO: getMaster attempt 3 of 35 failed; retrying after sleep of 300,
exception=com.google.protobuf.ServiceException: java.io.IOException: Call
to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:60000 failed on local
exception: java.io.EOFException
Aug 20, 2015 12:04:31 AM
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
makeStub
INFO: getMaster attempt 4 of 35 failed; retrying after sleep of 500,
exception=com.google.protobuf.ServiceException: java.io.IOException: Call
to blitz2-mnds1-3-sfm.ops.sfdc.net/{IPAddress}:60000 failed on local
exception: java.io.EOFException
Aug 20, 2015 12:04:32 AM:

Re: PhoenixHbaseStorage on secure cluster

Posted by Ravi Kiran <ma...@gmail.com>.
Hi Siddhi,
   I remember a fix for this was done and tested as part of
https://issues.apache.org/jira/browse/PHOENIX-1078 . If possible, could you
go a bit deeper in explaining how you are calling PhoenixHBaseStorage from
a map task?

Regards
Ravi


Re: PhoenixHbaseStorage on secure cluster

Posted by Siddhi Mehta <si...@gmail.com>.
Hey Guys,

Just wanted to ping once again and see if anyone has tried a Phoenix-Pig
integration job against a secure HBase cluster, with the Pig job started
from within a map task.


I see the following exception in the HMaster logs:
ipc.RpcServer - RpcServer.listener,port=60000: count of bytes read: 0
org.apache.hadoop.hbase.security.AccessDeniedException: Authentication is
required
        at
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1516)
        at
org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:856)
        at
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:647)
        at
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:622)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Somewhere in the flow my HBASE_AUTH_TOKEN is being messed up.
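
One way to check whether the token survives into the map task is to dump the
current user's credentials from inside the task (a debugging sketch; the
HBASE_AUTH_TOKEN kind string is what HBase 0.98 uses for its delegation
token, and the method below would be called from your own mapper code):

```java
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenDump {
    // Call from inside the map task to see which tokens made it there
    public static void dumpTokens() throws Exception {
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        for (Token<? extends TokenIdentifier> t : ugi.getTokens()) {
            // In a working secure setup you should see a token whose kind
            // is HBASE_AUTH_TOKEN alongside the HDFS/MapReduce tokens
            System.out.println("kind=" + t.getKind()
                    + " service=" + t.getService());
        }
    }
}
```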

--Siddhi




-- 
Regards,
Siddhi