Posted to issues@hive.apache.org by "Hive QA (JIRA)" <ji...@apache.org> on 2017/06/27 19:12:00 UTC

[jira] [Commented] (HIVE-16970) General Improvements To org.apache.hadoop.hive.metastore.cache.CacheUtils

    [ https://issues.apache.org/jira/browse/HIVE-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065325#comment-16065325 ] 

Hive QA commented on HIVE-16970:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12874607/HIVE-16970.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 10850 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=150)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query16] (batchId=233)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query94] (batchId=233)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testBootstrapFunctionReplication (batchId=217)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testCreateFunctionIncrementalReplication (batchId=217)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testCreateFunctionWithFunctionBinaryJarsOnHDFS (batchId=217)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=178)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=178)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=178)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout (batchId=227)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testConnection (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValid (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValidNeg (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeProxyAuth (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeTokenAuth (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testProxyAuth (batchId=239)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=239)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/5786/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/5786/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-5786/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12874607 - PreCommit-HIVE-Build

> General Improvements To org.apache.hadoop.hive.metastore.cache.CacheUtils
> -------------------------------------------------------------------------
>
>                 Key: HIVE-16970
>                 URL: https://issues.apache.org/jira/browse/HIVE-16970
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>    Affects Versions: 3.0.0
>            Reporter: BELUGA BEHR
>            Assignee: BELUGA BEHR
>            Priority: Trivial
>         Attachments: HIVE-16970.1.patch
>
>
> # Simplify
> # Do not instantiate empty collections
> # Parsing is incorrect:
> {code:title=org.apache.hadoop.hive.metastore.cache.CacheUtils}
>   public static String buildKey(String dbName, String tableName, List<String> partVals) {
>     String key = buildKey(dbName, tableName);
>     if (partVals == null || partVals.size() == 0) {
>       return key;
>     }
>     // BUG: no delimiter is appended between the "tableName" and the first "partVal"
>     for (int i = 0; i < partVals.size(); i++) {
>       key += partVals.get(i);
>       if (i != partVals.size() - 1) {
>         key += delimit;
>       }
>     }
>     return key;
>   }
> public static Object[] splitPartitionColStats(String key) {
> // ...
> }
> {code}
> Building a key and then passing it to the "split" method yields:
> {code}
> buildKey("db","Table",["Part1","Part2","Part3"], "col");
> [db, tablePart1, [Part2, Part3], col]
> // "table" and "Part1" is mistakenly concatenated
> {code}
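> A minimal sketch of one possible fix (assuming the same class context as the snippet above, i.e. the {{delimit}} constant and the two-argument {{buildKey}} overload; the actual change in HIVE-16970.1.patch may differ) is to prepend the delimiter before every partition value so the table name and the first value remain separable:
> {code:title=Hypothetical corrected buildKey (illustration only)}
>   public static String buildKey(String dbName, String tableName, List<String> partVals) {
>     String key = buildKey(dbName, tableName);
>     if (partVals == null || partVals.isEmpty()) {
>       return key;
>     }
>     StringBuilder sb = new StringBuilder(key);
>     for (String partVal : partVals) {
>       // Append the delimiter before each value, including the first one,
>       // so the split method can separate the table name from "Part1".
>       sb.append(delimit).append(partVal);
>     }
>     return sb.toString();
>   }
> {code}
> With a change along these lines, the example above would no longer fold "table" and "Part1" into a single token when the key is split.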



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)