Posted to issues@hive.apache.org by "Hive QA (JIRA)" <ji...@apache.org> on 2017/02/01 23:55:51 UTC

[jira] [Commented] (HIVE-14901) HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks

    [ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15849140#comment-15849140 ] 

Hive QA commented on HIVE-14901:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12850490/HIVE-14901.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 11021 tests executed
*Failed tests:*
{noformat}
TestDerbyConnector - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_char_simple] (batchId=147)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=223)
org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth.org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth (batchId=218)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthUDFBlacklist.testBlackListedUdfUsage (batchId=217)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization.testAllowedCommands (batchId=218)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization.testAuthorization1 (batchId=218)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization.testBlackListedUdfUsage (batchId=218)
org.apache.hive.jdbc.authorization.TestJdbcWithSQLAuthorization.testConfigWhiteList (batchId=218)
org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.testAuthorization1 (batchId=229)
org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.testAuthorization1 (batchId=229)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3311/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3311/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3311/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12850490 - PreCommit-HIVE-Build

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.1.patch, HIVE-14901.patch
>
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that we write in tasks. However, we should ideally use the user-supplied value (which can be extracted from the {{ThriftCLIService.FetchResults}} request parameter) to decide how many rows to serialize in a blob in the tasks. We should still use {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound on it, so that we don't go OOM in tasks and HS2.
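
For illustration only, here is a minimal Java sketch (not Hive's actual implementation) of the clamping behaviour described above: the client-requested fetch size decides how many rows a task serializes per blob, while a stand-in constant for {{hive.server2.thrift.resultset.max.fetch.size}} caps it so neither the tasks nor HS2 run out of memory. The class name, constant, and default value are invented for the example.

{noformat}
// Hypothetical sketch only -- not Hive's actual implementation.
public final class FetchSizeClampExample {

  // Stand-in for hive.server2.thrift.resultset.max.fetch.size (assumed value).
  private static final int SERVER_MAX_FETCH_SIZE = 10000;

  /**
   * Number of rows a task should serialize per blob: the user-supplied fetch
   * size when present, capped by the server-side maximum.
   */
  static int effectiveFetchSize(int clientRequestedFetchSize) {
    if (clientRequestedFetchSize <= 0) {
      // No usable client value: fall back to the server-side maximum/default.
      return SERVER_MAX_FETCH_SIZE;
    }
    return Math.min(clientRequestedFetchSize, SERVER_MAX_FETCH_SIZE);
  }

  public static void main(String[] args) {
    System.out.println(effectiveFetchSize(500));    // 500   (client value honoured)
    System.out.println(effectiveFetchSize(50000));  // 10000 (capped at server max)
    System.out.println(effectiveFetchSize(0));      // 10000 (fallback to server max)
  }
}
{noformat}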



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)