Posted to user@spark.apache.org by Ajinkya Kale <ka...@gmail.com> on 2016/01/21 02:41:27 UTC

HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

I have posted this on the HBase user list, but I thought it makes more sense
on the Spark user list.
I am able to read the table in yarn-client mode from spark-shell, but I have
exhausted all online forums looking for options to get it working in
yarn-cluster mode through spark-submit.

I am using this code example
http://www.vidyasource.com/blog/Programming/Scala/Java/Data/Hadoop/Analytics/2014/01/25/lighting-a-spark-with-hbase
to read an HBase table using Spark, with the only change being that I set
hbase.zookeeper.quorum in code, as it is not being picked up from
hbase-site.xml.
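For context, the read path from that example with the quorum override looks roughly like this (a sketch, not my exact code; the table name and ZooKeeper hosts are placeholders):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

val hbaseConf = HBaseConfiguration.create()
// Normally picked up from hbase-site.xml; set explicitly here because
// it is not being resolved in yarn-cluster mode.
hbaseConf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com")
hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")

val sc = new SparkContext(new SparkConf().setAppName("hbase-read"))
val rdd = sc.newAPIHadoopRDD(
  hbaseConf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
println(rdd.count())
```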

Spark 1.5.3

HBase 0.98.0


I am facing this error:

 16/01/20 12:56:59 WARN client.ConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=3, exceptions:
 Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
 Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
 Wed Jan 20 12:56:59 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:751)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:147)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1215)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1280)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1128)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1111)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1070)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:347)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:201)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:111)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1281)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1276)

I tried adding the hbase-protocol jar in spark-defaults.conf and on the
driver classpath as suggested here
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalAccessError-class-com-google-protobuf-HBaseZeroCopyByteString-cannot-access-its-supg-td24303.html
but with no success.
Any suggestions?
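For reference, what I tried amounts to roughly the following (the jar path is illustrative):

```
# spark-defaults.conf (jar path is illustrative)
spark.driver.extraClassPath /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar

# equivalently, on the spark-submit command line:
#   spark-submit --driver-class-path /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar ...
```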

Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ajinkya Kale <ka...@gmail.com>.
I tried --jars, which supposedly does that, but it did not work.


Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ajinkya Kale <ka...@gmail.com>.
Hi Ted,
Is there a way for the executors to have the hbase-protocol jar on their
classpath?


Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ted Yu <yu...@gmail.com>.
The classpath configurations on the driver and the executors are different.

Cheers
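A sketch of what supplying the jar on both sides might look like (the jar path is an assumption, and this is not a verified fix on 0.98.0):

```
# spark-defaults.conf -- make hbase-protocol visible to both JVMs
spark.driver.extraClassPath   /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar
spark.executor.extraClassPath /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar
```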


Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ajinkya Kale <ka...@gmail.com>.
Does this issue occur only when the computations run in distributed mode?
If I do (pseudo code):
rdd.collect.call_to_hbase, I don't get this error,

but if I do:
rdd.call_to_hbase.collect, it throws this error.
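The difference between the two patterns above is where the HBase call actually executes (callToHbase is a placeholder for whatever HBase client call is being made):

```scala
// 1) collect() first: rows are pulled back to the driver, and the HBase
//    call runs only in the driver JVM -- only the driver's classpath
//    matters, so no error.
rdd.collect().foreach(row => callToHbase(row))

// 2) HBase call inside a transformation: the closure is shipped to the
//    executors, so each executor JVM also needs hbase-protocol on its
//    classpath -- this is where the IllegalAccessError surfaces.
rdd.map(row => callToHbase(row)).collect()
```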


Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ajinkya Kale <ka...@gmail.com>.
Unfortunately I cannot upgrade at this moment (not a decision I can make) :(

>

Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ted Yu <yu...@gmail.com>.
I am not aware of a workaround.

Can you upgrade to a 0.98.4+ release?

Cheers

On Wed, Jan 20, 2016 at 6:26 PM, Ajinkya Kale <ka...@gmail.com> wrote:

> Hi Ted,
>
> Thanks for responding.
> Is there a workaround for 0.98.0? Adding the hbase-protocol jar to
> HADOOP_CLASSPATH didn't work for me.
>
> On Wed, Jan 20, 2016 at 6:14 PM Ted Yu <yu...@gmail.com> wrote:
>
>> 0.98.0 didn't have the fix from HBASE-11118.
>>
>> Please upgrade your HBase version and try again.
>>
>> If there is still a problem, please pastebin the stack trace.
>>
>> Thanks
>>
>> On Wed, Jan 20, 2016 at 5:41 PM, Ajinkya Kale <ka...@gmail.com>
>> wrote:
>>
>>>     at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
>>>     at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:111)
>>>     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>>>     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>>>     at scala.Option.getOrElse(Option.scala:120)
>>>     at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>>>     at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1281)
>>>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
>>>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
>>>     at org.apache.spark.rdd.RDD.take(RDD.scala:1276)
>>>
>>> I tried adding the hbase protocol jar on spar-defaults.conf and in the
>>> driver-classpath as suggested here
>>> http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalAccessError-class-com-google-protobuf-HBaseZeroCopyByteString-cannot-access-its-supg-td24303.html but
>>> no success.
>>> Any suggestions ?
>>>
>>>
>>

Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ajinkya Kale <ka...@gmail.com>.
Hi Ted,

Thanks for responding.
Is there a workaround for 0.98.0? Adding the hbase-protocol jar to
HADOOP_CLASSPATH didn't work for me.

On Wed, Jan 20, 2016 at 6:14 PM Ted Yu <yu...@gmail.com> wrote:

> 0.98.0 didn't have the fix from HBASE-11118.
>
> Please upgrade your hbase version and try again.
>
> If there is still a problem, please pastebin the stack trace.
>
> Thanks
>
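For anyone who lands on this thread: the classpath route usually attempted for this IllegalAccessError is to put the hbase-protocol jar at the front of both the driver and executor classpaths, because HBaseZeroCopyByteString must be loaded by the same classloader as protobuf's LiteralByteString. This is only a sketch — the jar path and version are placeholders for your install, and on 0.98.0 it may still fail because the HBASE-11118 fix is absent:

```
# spark-submit flags (jar path and version are placeholders):
spark-submit \
  --master yarn-cluster \
  --driver-class-path /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar \
  --conf spark.executor.extraClassPath=/opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar \
  --files /etc/hbase/conf/hbase-site.xml \
  your-app.jar

# or, equivalently, in spark-defaults.conf:
spark.driver.extraClassPath    /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar
spark.executor.extraClassPath  /opt/hbase/lib/hbase-protocol-0.98.0-hadoop2.jar
```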

Re: HBase 0.98.0 with Spark 1.5.3 issue in yarn-cluster mode

Posted by Ted Yu <yu...@gmail.com>.
0.98.0 didn't have the fix from HBASE-11118.

Please upgrade your hbase version and try again.

If there is still a problem, please pastebin the stack trace.

Thanks

On Wed, Jan 20, 2016 at 5:41 PM, Ajinkya Kale <ka...@gmail.com> wrote:

>
> I posted this on the HBase user list, but I thought it makes more sense
> on the Spark user list.
> I am able to read the table in yarn-client mode from spark-shell, but I
> have exhausted all online forums looking for options to get it working in
> yarn-cluster mode through spark-submit.
>
> I am using this code example
> http://www.vidyasource.com/blog/Programming/Scala/Java/Data/Hadoop/Analytics/2014/01/25/lighting-a-spark-with-hbase
> to read an HBase table using Spark, with the only change being that I set
> hbase.zookeeper.quorum in code, since it is not picked up from
> hbase-site.xml.
>
> Spark 1.5.3
>
> HBase 0.98.0
>
>
> I am facing this error:
>
>  16/01/20 12:56:59 WARN client.ConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=3, exceptions:
> Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
> Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
> Wed Jan 20 12:56:59 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
>
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>     at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:751)
>     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:147)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1215)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1280)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1128)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1111)
>     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1070)
>     at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:347)
>     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:201)
>     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)
>     at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
>     at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:111)
>     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
>     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
>     at scala.Option.getOrElse(Option.scala:120)
>     at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
>     at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1281)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
>     at org.apache.spark.rdd.RDD.take(RDD.scala:1276)
>
> I tried adding the hbase-protocol jar in spark-defaults.conf and on the
> driver classpath as suggested here
> http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalAccessError-class-com-google-protobuf-HBaseZeroCopyByteString-cannot-access-its-supg-td24303.html
> but with no success.
> Any suggestions?
>
>
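For context, the read pattern from the linked blog post, with hbase.zookeeper.quorum set in code, looks roughly like the sketch below. The quorum hosts and table name are placeholders, and this has not been verified against a running cluster:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-read"))

    // hbase-site.xml is not being picked up, so configure HBase in code.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com") // placeholder hosts
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table")                    // placeholder table

    // This is the call path in the stack trace: TableInputFormat.setConf -> new HTable(...).
    val rdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
    println(rdd.count())
    sc.stop()
  }
}
```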