Posted to issues@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2015/08/05 00:03:05 UTC

[jira] [Updated] (HBASE-13825) Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name

     [ https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated HBASE-13825:
-----------------------------------
    Summary: Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name  (was: Get operations on large objects fail with protocol errors)

> Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-13825
>                 URL: https://issues.apache.org/jira/browse/HBASE-13825
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.0.0, 1.0.1
>            Reporter: Dev Lakhani
>            Assignee: Andrew Purtell
>             Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
>         Attachments: HBASE-13825-0.98.patch, HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
>         at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
>         at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
>         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>         at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
>         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but that issue concerns cluster status.
> Scan and put operations on the same data work fine.
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
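
For context, protobuf's CodedInputStream enforces a 64MB message size limit by default, which is what a Get response larger than 64MB trips over. The renamed summary points at the fix: route deserialization through ProtobufUtil helpers that raise the limit before merging, instead of calling the builder's own mergeFrom/mergeDelimitedFrom. Below is a minimal sketch of that technique; the helper names mirror the summary, but the class name, chosen size limit, and exact signatures are assumptions, not the committed patch:

    import com.google.protobuf.CodedInputStream;
    import com.google.protobuf.Message;

    import java.io.IOException;
    import java.io.InputStream;

    public final class ProtobufMergeSketch {

      // Hypothetical cap; the committed patch may choose a different value.
      private static final int SIZE_LIMIT = Integer.MAX_VALUE;

      // Merge a serialized message into the builder without hitting the
      // 64MB default limit of CodedInputStream.
      public static void mergeFrom(Message.Builder builder, byte[] bytes)
          throws IOException {
        CodedInputStream in = CodedInputStream.newInstance(bytes);
        in.setSizeLimit(SIZE_LIMIT);  // lift the 64MB default
        builder.mergeFrom(in);
        in.checkLastTagWas(0);        // verify the whole message was consumed
      }

      // Same idea for a varint-length-delimited message on a stream: read the
      // length prefix, buffer exactly that many bytes, then merge as above.
      public static void mergeDelimitedFrom(Message.Builder builder, InputStream in)
          throws IOException {
        int firstByte = in.read();
        if (firstByte == -1) {
          return;  // clean end of stream, nothing to merge
        }
        int size = CodedInputStream.readRawVarint32(firstByte, in);
        byte[] bytes = new byte[size];
        int off = 0;
        while (off < size) {
          int n = in.read(bytes, off, size - off);
          if (n == -1) {
            throw new IOException("Truncated delimited message");
          }
          off += n;
        }
        mergeFrom(builder, bytes);
      }
    }

Callers would then replace builder.mergeFrom(bytes) with ProtobufMergeSketch.mergeFrom(builder, bytes) at each deserialization site, which is the mechanical substitution the issue title describes.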



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)