Posted to dev@hive.apache.org by "Hive QA (JIRA)" <ji...@apache.org> on 2014/09/24 20:43:34 UTC
[jira] [Commented] (HIVE-8233) multi-table insertion doesn't work with ForwardOperator [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146715#comment-14146715 ]
Hive QA commented on HIVE-8233:
-------------------------------
{color:red}Overall{color}: -1 at least one test failed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12670840/HIVE-8233.1-spark.patch
{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6503 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
org.apache.hadoop.hive.ql.parse.TestParse.testParse_union
{noformat}
Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/150/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/150/console
Test logs: http://ec2-54-176-176-199.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-150/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12670840
> multi-table insertion doesn't work with ForwardOperator [Spark Branch]
> ----------------------------------------------------------------------
>
> Key: HIVE-8233
> URL: https://issues.apache.org/jira/browse/HIVE-8233
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Chao
> Assignee: Chao
> Attachments: HIVE-8233.1-spark.patch
>
>
> Right now, for multi-table insertion, we start from the multiple FileSinkOperators and break the plan at their lowest common ancestor (LCA), inserting temporary FileSinkOperators and TableScanOperators. A special case is when the LCA is a ForwardOperator: in that case we don't break the plan, since it has already been optimized.
> However, there's an issue. Consider the following plan:
> {noformat}
>             ...
>              |
>            RS_0
>              |
>             FOR
>            /    \
>       GBY_1      GBY_2
>         |          |
>        ...        ...
>         |          |
>       RS_1       RS_2
>         |          |
>        ...        ...
>         |          |
>       FS_1       FS_2
> {noformat}
> which may result in:
> {noformat}
>       RW
>      /  \
>    RW    RW
> {noformat}
> Hence, because of the issues described in HIVE-7731 and HIVE-8118, both downstream branches will get duplicated (and identical) input.
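The split decision described above can be sketched with a toy model. This is illustrative Python only: the `Op` class, `lowest_common_ancestor`, and `should_split` are assumptions made for the sketch, not Hive's actual Spark-branch classes or API.

```python
# Toy model of the plan-splitting decision described in this report.
# All names here are hypothetical; only the plan shape comes from the issue.

class Op:
    """A plan operator with a single parent (enough for this sketch)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def ancestors(op):
    """Return op and all of its ancestors, nearest first."""
    chain = []
    while op is not None:
        chain.append(op)
        op = op.parent
    return chain

def lowest_common_ancestor(sinks):
    """First ancestor of sinks[0] that is also an ancestor of every other sink."""
    other_paths = [set(ancestors(s)) for s in sinks[1:]]
    for a in ancestors(sinks[0]):
        if all(a in path for path in other_paths):
            return a
    return None

def should_split(lca):
    # The rule this report flags as unsafe: skip the split whenever the
    # LCA is a ForwardOperator, assuming that part is already optimized.
    return not lca.name.startswith("FOR")

# Build the plan from the description: RS_0 -> FOR -> two GBY/RS/FS branches.
rs0  = Op("RS_0")
fwd  = Op("FOR",   rs0)
gby1 = Op("GBY_1", fwd);  gby2 = Op("GBY_2", fwd)
rs1  = Op("RS_1",  gby1); rs2  = Op("RS_2",  gby2)
fs1  = Op("FS_1",  rs1);  fs2  = Op("FS_2",  rs2)

lca = lowest_common_ancestor([fs1, fs2])
# The LCA is the ForwardOperator, so the plan is not broken there, and each
# branch keeps its own ReduceSink -- producing the RW / RW RW shape above.
```

Under this model, `should_split(lca)` returns False for the plan in question, which is exactly the case where both downstream reduce works end up consuming the same duplicated input.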
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)