Posted to issues@hive.apache.org by "Norris Lee (JIRA)" <ji...@apache.org> on 2017/02/01 19:46:51 UTC

[jira] [Updated] (HIVE-14901) HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks

     [ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Norris Lee updated HIVE-14901:
------------------------------
    Status: In Progress  (was: Patch Available)

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.patch
>
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that we write in tasks. Ideally, though, we should use the user-supplied value (which can be extracted from the ThriftCLIService.FetchResults request parameter) to decide how many rows to serialize in a blob in the tasks. We should still use {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound on it, so that we don't go OOM in tasks or HS2.
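
A minimal sketch of the clamping behavior described above, using hypothetical names rather than the actual Hive internals (the real change would live in HiveServer2's fetch/serialization path):

    // Sketch only: effectiveFetchSize() and its callers are hypothetical.
    public final class FetchSizeClamp {

        /**
         * Rows to serialize per blob: the client's requested fetch size,
         * bounded above by the server-side maximum
         * (hive.server2.thrift.resultset.max.fetch.size) so that neither
         * the tasks nor HS2 risk going OOM.
         */
        static long effectiveFetchSize(long requestedFetchSize, long serverMaxFetchSize) {
            if (requestedFetchSize <= 0) {
                // No usable client value; fall back to the server maximum.
                return serverMaxFetchSize;
            }
            return Math.min(requestedFetchSize, serverMaxFetchSize);
        }

        public static void main(String[] args) {
            // Example: client requests 10,000 rows, server caps at 1,000.
            System.out.println(effectiveFetchSize(10_000, 1_000)); // prints 1000
        }
    }
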



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)