Posted to issues@hive.apache.org by "Naresh P R (Jira)" <ji...@apache.org> on 2023/02/28 20:46:00 UTC

[jira] [Updated] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls

     [ https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-27114:
------------------------------
    Summary: Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls  (was: Provide a configurable filter for removing useless properties from PartitionDesc objects from getPartitions HMS Calls)

> Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-27114
>                 URL: https://issues.apache.org/jira/browse/HIVE-27114
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Naresh P R
>            Priority: Major
>
> HMS API calls are throwing the following exception because of the Thrift upgrade:
>  
> {code:java}
> org.apache.thrift.transport.TTransportException: MaxMessageSize reached
>         at org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96) 
>         at org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97) 
>         at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) 
>         at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39) 
>         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) 
>         at org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63) 
>         at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417) 
>         at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411) 
>         at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286) 
>         at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205) 
>         at org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) 
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) 
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) 
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) 
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) 
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290) 
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275)  {code}
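>  
> For context, the check that fires above lives in org.apache.thrift.transport.TEndpointTransport and simply counts the bytes consumed while reading a single message against the configured limit. The following is only a conceptual rendering of that guard, not the actual Thrift source:
> {code:java}
> // Conceptual sketch (not the exact Thrift source) of the guard that produces
> // "MaxMessageSize reached": each message gets a byte budget equal to the
> // configured max message size, every read consumes from it, and exceeding
> // the budget aborts the read.
> class MessageSizeGuard {
>   private long remainingMessageSize;
> 
>   MessageSizeGuard(long maxMessageSize) {
>     this.remainingMessageSize = maxMessageSize; // e.g. Thrift's default limit
>   }
> 
>   void countConsumedBytes(long numBytes) throws org.apache.thrift.transport.TTransportException {
>     if (remainingMessageSize >= numBytes) {
>       remainingMessageSize -= numBytes;       // still within budget
>     } else {
>       remainingMessageSize = 0;
>       throw new org.apache.thrift.transport.TTransportException("MaxMessageSize reached");
>     }
>   }
> }
> {code}
> So a get_partitions response carrying thousands of partitions, each with bulky parameters, can exceed the limit even though no single field is unreasonably large.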
>  
>  
> Large partition metadata is causing this issue.
> e.g., Impala stores a huge stats chunk in the partition metadata with {{param_keys = (impala_intermediate_stats_chunk*)}}; these PARTITION_PARAMS entries are not required by Hive. Such params should be skipped while preparing the Partition objects sent from HMS to HS2.
> Similarly, params matching any user-defined regex should be skipped in the getPartitions HMS API call, similar to HIVE-25501.
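>  
> A minimal sketch of what such a filter could look like on the HMS side is below; the class and the config key name are only illustrative (the actual patch may wire this differently), but the idea is to drop parameter keys matching a user-supplied regex before the Partition objects are serialized back to HS2:
> {code:java}
> import java.util.List;
> import java.util.Map;
> import java.util.regex.Pattern;
> 
> import org.apache.hadoop.hive.metastore.api.Partition;
> 
> // Illustrative only: applies a user-defined exclude regex (for example
> // "impala_intermediate_stats_chunk.*") to each partition's parameters so the
> // bulky, Hive-irrelevant entries never travel over the Thrift connection.
> public class PartitionParamFilter {
>   private final Pattern excludePattern;
> 
>   public PartitionParamFilter(String excludeRegex) {
>     // The regex would come from a (hypothetical) HMS config key, e.g.
>     // "metastore.partitions.parameters.exclude.pattern".
>     this.excludePattern = (excludeRegex == null || excludeRegex.isEmpty())
>         ? null : Pattern.compile(excludeRegex);
>   }
> 
>   /** Removes matching parameter keys in place before the partitions are returned to the client. */
>   public void filter(List<Partition> partitions) {
>     if (excludePattern == null) {
>       return;
>     }
>     for (Partition p : partitions) {
>       Map<String, String> params = p.getParameters();
>       if (params != null) {
>         params.keySet().removeIf(k -> excludePattern.matcher(k).matches());
>       }
>     }
>   }
> }
> {code}
> With a pattern like {{impala_intermediate_stats_chunk.*}}, the Impala stats chunks would be stripped from every get_partitions response, which also keeps the response under Thrift's max message size.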
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)