Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2022/07/14 13:46:00 UTC

[jira] [Work logged] (HIVE-26394) Query based compaction fails for table with more than 6 columns

     [ https://issues.apache.org/jira/browse/HIVE-26394?focusedWorklogId=790959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-790959 ]

ASF GitHub Bot logged work on HIVE-26394:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Jul/22 13:45
            Start Date: 14/Jul/22 13:45
    Worklog Time Spent: 10m 
      Work Description: maheshk114 opened a new pull request, #3442:
URL: https://github.com/apache/hive/pull/3442

   
   
   ### What changes were proposed in this pull request?
   During ORC reader creation, use the split information instead of the file schema to decide whether the file is an original file or a full ACID file.
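   
   A rough sketch of the idea (a hedged illustration, not the literal patch; `useFullAcidReadPath` is a hypothetical helper): the split already records whether it was generated for an original-layout read, e.g. via `OrcSplit#isOriginal()`, so reader creation can trust that flag instead of re-deriving the answer from the file footer.
   
   ```java
   import org.apache.hadoop.hive.ql.io.orc.OrcSplit;
   
   public final class ReaderPathSketch {
   
     // Hypothetical helper, for illustration only: prefer the split's own
     // metadata, fixed at split-generation time, over the file schema.
     static boolean useFullAcidReadPath(OrcSplit split) {
       // OrcSplit#isOriginal() returns true when the split generator saw
       // the file as plain (non-ACID) layout.
       return !split.isOriginal();
     }
   }
   ```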
   
   
   ### Why are the changes needed?
   To fix an issue related to query-based compaction. Query-based compaction creates an external table pointing to the location of an ACID table. This causes queries to take the full ACID read path instead of the external (original) file path, even though the split is generated for an external table.
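   
   For illustration, a minimal sketch of the kind of schema sniffing that misfires here, assuming the ACID check amounts to recognizing the six-column wrapper struct (`operation`, `originalTransaction`, `bucket`, `rowId`, `currentTransaction`, `row`):
   
   ```java
   import java.util.Arrays;
   
   import org.apache.orc.Reader;
   import org.apache.orc.TypeDescription;
   
   public final class AcidSchemaSniffSketch {
   
     // Simplified stand-in for the file-schema check: an ACID file's root
     // struct has exactly these six fields, the last being the nested `row`.
     static boolean looksLikeFullAcid(Reader file) {
       TypeDescription root = file.getSchema();
       return root.getCategory() == TypeDescription.Category.STRUCT
           && root.getFieldNames().equals(Arrays.asList(
               "operation", "originalTransaction", "bucket",
               "rowId", "currentTransaction", "row"));
     }
     // A temp external table created over an ACID table's directory reads
     // the same files, so this check answers "ACID" even though the split
     // was generated for an external (original-layout) table.
   }
   ```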
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Unit test
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 790959)
    Remaining Estimate: 0h
            Time Spent: 10m

> Query based compaction fails for table with more than 6 columns
> ---------------------------------------------------------------
>
>                 Key: HIVE-26394
>                 URL: https://issues.apache.org/jira/browse/HIVE-26394
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, HiveServer2
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Query-based compaction creates a temporary external table whose location points to the location of the table being compacted, so this external table holds files of ACID type. When a query runs on this table, the table type is decided by reading the files present at the table location. Because the table location contains files in ACID format, the table is assumed to be an ACID table. This causes a failure while generating the SARG columns, as the column numbers do not match the schema.
>  
> {code:java}
> Error doing query based minor compaction
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed to run INSERT into table delta_cara_pn_tmp_compactor_clean_1656061070392_result select `operation`, `originalTransaction`, `bucket`, `rowId`, `currentTransaction`, `row` from delta_clean_1656061070392 where `originalTransaction` not in (749,750,766,768,779,783,796,799,818,1145,1149,1150,1158,1159,1160,1165,1166,1169,1173,1175,1176,1871,9631)
> 	at org.apache.hadoop.hive.ql.DriverUtils.runOnDriver(DriverUtils.java:73)
> 	at org.apache.hadoop.hive.ql.txn.compactor.QueryCompactor.runCompactionQueries(QueryCompactor.java:138)
> 	at org.apache.hadoop.hive.ql.txn.compactor.MinorQueryCompactor.runCompaction(MinorQueryCompactor.java:70)
> 	at org.apache.hadoop.hive.ql.txn.compactor.Worker.findNextCompactionAndExecute(Worker.java:498)
> 	at org.apache.hadoop.hive.ql.txn.compactor.Worker.lambda$run$0(Worker.java:120)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:750)
> Caused by: (responseCode = 2, errorMessage = FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1656061159324_0000_1_00, diagnostics=[Task failed, taskId=task_1656061159324_0000_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1656061159324_0000_1_00_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 6
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:348)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:277)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:75)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:62)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:62)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:38)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:750)
> Caused by: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 6
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
> 	at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
> 	at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
> 	at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:706)
> 	at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:665)
> 	at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
> 	at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
> 	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:543)
> 	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:189)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:293)
> 	... 15 more
> Caused by: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 6
> 	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
> 	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:472)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
> 	... 26 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSargColumnNames(OrcInputFormat.java:551)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.setSearchArgument(OrcInputFormat.java:577)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.createReaderFromFile(OrcInputFormat.java:366)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:276)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:2033)
> 	at org.apache.hadoop.hive.ql.io.RecordReaderWrapper.create(RecordReaderWrapper.java:72)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:463)
> 	... 27 more {code}
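> 
> A toy model of the index arithmetic behind the {{ArrayIndexOutOfBoundsException: 6}} above (an assumption about the mechanism, not the literal {{getSargColumnNames}} code): on the full ACID path the SARG column ids are resolved past the ACID wrapper columns, while the temp external table's schema has only the 6 wrapper columns, so the first shifted id already overruns the array.
> {code:java}
> public class SargOffsetToyModel {
>   public static void main(String[] args) {
>     // The temp external table exposes exactly the 6 wrapper columns.
>     String[] tableCols = {"operation", "originalTransaction", "bucket",
>                           "rowId", "currentTransaction", "row"};
> 
>     // Original/external path: SARG ids map 1:1 onto tableCols (0..5).
>     System.out.println("ok: " + tableCols[5]);
> 
>     // Misclassified-as-ACID path (assumed mechanism): ids are shifted
>     // past the wrapper, so the first id probed is already 6.
>     int shiftedId = 6;
>     System.out.println(tableCols[shiftedId]); // ArrayIndexOutOfBoundsException: 6
>   }
> }
> {code}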


