Posted to issues@hive.apache.org by "Vaibhav Gumashta (JIRA)" <ji...@apache.org> on 2016/01/17 00:38:39 UTC

[jira] [Comment Edited] (HIVE-12049) Provide an option to write serialized thrift objects in final tasks

    [ https://issues.apache.org/jira/browse/HIVE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15103489#comment-15103489 ] 

Vaibhav Gumashta edited comment on HIVE-12049 at 1/16/16 11:38 PM:
-------------------------------------------------------------------

Attaching a WIP patch (this one includes the new SerDe) while I clean up local commits to generate an end-to-end patch. I think it'll be easier to run unit tests if this JIRA and HIVE-12428 are eventually merged; however, keeping them separate might be easier for review.

[~rohitdholakia] [~thejas] what do you think? 


was (Author: vgumashta):
Attaching a WIP patch while I clean up local commits to generate an end-to-end patch. I think it'll be easier to run unit tests if this JIRA and HIVE-12428 are eventually merged; however, keeping them separate might be easier for review.

[~rohitdholakia] [~thejas] what do you think? 

> Provide an option to write serialized thrift objects in final tasks
> -------------------------------------------------------------------
>
>                 Key: HIVE-12049
>                 URL: https://issues.apache.org/jira/browse/HIVE-12049
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Rohit Dholakia
>            Assignee: Rohit Dholakia
>         Attachments: HIVE-12049.1.patch
>
>
> For each fetch request to HiveServer2, we pay the penalty of deserializing the row objects and translating them into a different representation suitable for the RPC transfer. In moderate to high concurrency scenarios, this can result in significant CPU and memory waste. By having each task write the appropriate thrift objects to the output files, HiveServer2 can simply stream a batch of rows on the wire without incurring any of the additional cost of deserialization and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator can use to write thrift-formatted row batches to the output file. Using the pluggable {{hive.query.result.fileformat}} property, we can set it to SequenceFile and write a batch of thrift-formatted rows as a value blob. The FetchTask can then simply read the blob and send it over the wire. On the client side, the *DBC driver can read the blob, and since it is already formatted the way it expects, it can continue building the ResultSet as it does in the current implementation.
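
As a rough illustration of the approach described above (a sketch only, not the attached patch): a batch of rows is serialized once as a single thrift blob and appended to a SequenceFile as a BytesWritable value, so a reader can later stream the blob as-is. The class name {{ThriftRowBatchWriter}} and the output path are made up for the example; it assumes the HiveServer2 thrift types ({{TRowSet}}, {{TRow}}, {{TColumnValue}}) from {{org.apache.hive.service.cli.thrift}} and the thrift compact protocol. The real SerDe would instead receive rows through its serialize()/ObjectInspector path inside the FileSinkOperator.

{code:java}
// Hypothetical sketch: write a batch of rows as one thrift blob per SequenceFile value.
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hive.service.cli.thrift.TColumnValue;
import org.apache.hive.service.cli.thrift.TRow;
import org.apache.hive.service.cli.thrift.TRowSet;
import org.apache.hive.service.cli.thrift.TStringValue;
import org.apache.thrift.TException;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TCompactProtocol;

public class ThriftRowBatchWriter {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path out = new Path("/tmp/thrift_result_blob.seq"); // illustrative path only

    // Build a small TRowSet batch; in the real SerDe the rows would come from
    // the FileSinkOperator via serialize() and the row ObjectInspector.
    List<TRow> rows = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
      TRow row = new TRow();
      TStringValue v = new TStringValue();
      v.setValue("row-" + i);
      row.addToColVals(TColumnValue.stringVal(v));
      rows.add(row);
    }
    TRowSet batch = new TRowSet(0L, rows);

    // Serialize the whole batch once; this blob becomes the SequenceFile value.
    byte[] blob = serializeBatch(batch);

    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(out),
        SequenceFile.Writer.keyClass(NullWritable.class),
        SequenceFile.Writer.valueClass(BytesWritable.class))) {
      writer.append(NullWritable.get(), new BytesWritable(blob));
    }
  }

  static byte[] serializeBatch(TRowSet batch) throws TException {
    // Compact protocol keeps the per-batch blob small on disk and on the wire.
    return new TSerializer(new TCompactProtocol.Factory()).serialize(batch);
  }
}
{code}

On the fetch side, the corresponding reader would iterate the SequenceFile values and hand each blob to the client unchanged, avoiding per-row deserialization and translation.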



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)