Posted to dev@phoenix.apache.org by "Dmitry Goldenberg (JIRA)" <ji...@apache.org> on 2015/04/27 19:20:39 UTC

[jira] [Comment Edited] (PHOENIX-1926) Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString

    [ https://issues.apache.org/jira/browse/PHOENIX-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14514457#comment-14514457 ] 

Dmitry Goldenberg edited comment on PHOENIX-1926 at 4/27/15 5:20 PM:
---------------------------------------------------------------------

Nick, thanks for the links.

So, [HBASE-10877|https://issues.apache.org/jira/browse/HBASE-10877] says that
{quote}
As of HBASE-11118 we have a solution such that adding hbase-protocol.jar to the launch classpath is no longer necessary. This fix will be shipped in 0.98.4.
{quote}
We're running HBase 0.98.9 built for Hadoop 2 and are still seeing the issue. Could this be a regression? If so, what are some workaround avenues?

Some approaches were discussed in [HBASE-10304|https://issues.apache.org/jira/browse/HBASE-10304]:
{quote}
Solution
In order to satisfy the new classloader requirements, hbase-protocol.jar must be included in Hadoop's classpath. This can be resolved system-wide by including a reference to the hbase-protocol.jar in hadoop's lib directory, via a symlink or by copying the jar into the new location.
This can also be achieved on a per-job launch basis by specifying a value for HADOOP_CLASSPATH at job submission time. All three of the following job launching commands satisfy this requirement:
$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase mapredcp) hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass
{quote}

I have tried setting HADOOP_CLASSPATH to point at the hbase-protocol jar, but that did not fix the issue.

Do we want to include a reference to the hbase-protocol.jar in hadoop's lib directory?
If so, what are the implications for the Spark job:
a) does the Spark job jar need to have Apache Phoenix's HBase and Hadoop dependency classes rolled into it?
b) in a clustered execution, how will the executor find the HBase protocol jar? If the Spark job is running on a slave machine within a Hadoop cluster, does that mean we'll need to drop the hbase-protocol jar into the hadoop installation's lib directory on all slave machines?
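One avenue for the per-job case, sketched below and not confirmed against this thread: Spark's own extraClassPath settings can prepend a jar to the driver and executor JVM classpaths at submission time. The jar path and job names here are assumptions, modeled on the launch commands quoted above.

```shell
# Hypothetical location -- adjust to where hbase-protocol actually lives
# on the cluster nodes.
HBASE_PROTOCOL_JAR=/usr/lib/hbase/lib/hbase-protocol-0.98.9-hadoop2.jar

# spark.driver.extraClassPath / spark.executor.extraClassPath prepend the
# jar to each JVM's classpath, so HBaseZeroCopyByteString and its superclass
# LiteralByteString are loaded from the same jar by the same classloader.
spark-submit \
  --class MyJobMainClass \
  --conf spark.driver.extraClassPath="$HBASE_PROTOCOL_JAR" \
  --conf spark.executor.extraClassPath="$HBASE_PROTOCOL_JAR" \
  MyJob.jar
```

Note that spark.executor.extraClassPath resolves the path locally on each worker, so the jar would still need to exist at the same path on every slave machine, which bears directly on question b) above.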




> Attempt to update a record in HBase via Phoenix from a Spark job causes java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-1926
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1926
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.3.1
>         Environment: centos  x86_64 GNU/Linux
> Phoenix: 4.3.1 custom built for HBase 0.98.9, Hadoop 2.4.0
> HBase: 0.98.9-hadoop2
> Hadoop: 2.4.0
> Spark: spark-1.3.0-bin-hadoop2.4
>            Reporter: Dmitry Goldenberg
>            Priority: Critical
>
> Performing an UPSERT from within a Spark job, 
> UPSERT INTO items(entry_id, prop1, prop2) VALUES(?, ?, ?)
> causes
> 15/04/26 18:22:06 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00ITEMS
> java.util.concurrent.ExecutionException: java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1007)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311)
>         at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307)
>         at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333)
>         at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:237)
>         at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:231)
>         at org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:207)
>         at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:248)
>         at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:503)
>         at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:494)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:288)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:287)
>         at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:219)
>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:174)
>         at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:179)
> ...........................................
> Caused by: java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>         at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1265)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1258)
>         at org.apache.hadoop.hbase.client.HTable$17.call(HTable.java:1608)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)