Posted to issues@hive.apache.org by "Gopal V (JIRA)" <ji...@apache.org> on 2019/03/19 06:16:00 UTC

[jira] [Commented] (HIVE-21470) ACID: Optimize RecordReader creation when SearchArgument is provided

    [ https://issues.apache.org/jira/browse/HIVE-21470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795737#comment-16795737 ] 

Gopal V commented on HIVE-21470:
--------------------------------

I think the real problem is 

{code}
OrcRawRecordMerger.<init>(Configuration, boolean, Reader, boolean, int, ValidWriteIdList, Reader$Options, Path[], OrcRawRecordMerger$Options) line: 1057	
{code}

The RecordMerger path is somewhat messed up in ACIDv2: in a lot of cases it is entirely unnecessary, yet we still pay for constructing it (and its per-column readers) up front.
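
To make that concrete, here is a toy sketch of the kind of short-circuit this suggests (every name below is invented; this is not the actual Hive code path): only build the merging reader when the split actually has delete events to reconcile, and hand back a plain reader otherwise.

{code}
// Toy illustration only -- all names here are invented, not Hive code.
// The point: construct the expensive merging reader only when the split has
// something to merge (e.g. delete deltas); otherwise use a plain reader.
final class MergerBypassSketch {
  interface RowReader { String describe(); }

  static RowReader plainReader()   { return () -> "plain ORC reader, no merge"; }
  static RowReader mergingReader() { return () -> "OrcRawRecordMerger-style merging reader"; }

  // Hypothetical decision point; in real Hive this would come from the split/dir state.
  static RowReader openReader(boolean hasDeleteDeltas) {
    return hasDeleteDeltas ? mergingReader() : plainReader();
  }

  public static void main(String[] args) {
    System.out.println(openReader(false).describe());
    System.out.println(openReader(true).describe());
  }
}
{code}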

> ACID: Optimize RecordReader creation when SearchArgument is provided
> --------------------------------------------------------------------
>
>                 Key: HIVE-21470
>                 URL: https://issues.apache.org/jira/browse/HIVE-21470
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 3.1.1, 2.3.4
>            Reporter: Vaibhav Gumashta
>            Priority: Major
>
> Consider the following query:
> {code}
> select col1 from tbl1 where year_partition=2019;
> {code}
>  
> If the table has a lot of columns, we currently end up creating a TreeReader for every column, even when the data will not pass the SearchArgument:
> {code}
> TreeReaderFactory.createTreeReader(TypeDescription, TreeReaderFactory$Context) line: 2339	
> TreeReaderFactory$StructTreeReader.<init>(int, TypeDescription, TreeReaderFactory$Context) line: 1974	
> TreeReaderFactory.createTreeReader(TypeDescription, TreeReaderFactory$Context) line: 2390	
> RecordReaderImpl(RecordReaderImpl).<init>(ReaderImpl, Reader$Options) line: 267	
> RecordReaderImpl.<init>(ReaderImpl, Reader$Options, Configuration) line: 67	
> ReaderImpl.rowsOptions(Reader$Options, Configuration) line: 83	
> OrcRawRecordMerger$OriginalReaderPairToRead.<init>(OrcRawRecordMerger$ReaderKey, Reader, int, RecordIdentifier, RecordIdentifier, Reader$Options, OrcRawRecordMerger$Options, Configuration, ValidWriteIdList, int) line: 446	
> OrcRawRecordMerger.<init>(Configuration, boolean, Reader, boolean, int, ValidWriteIdList, Reader$Options, Path[], OrcRawRecordMerger$Options) line: 1057	
> OrcInputFormat.getReader(InputSplit, Options) line: 2108	
> OrcInputFormat.getRecordReader(InputSplit, JobConf, Reporter) line: 2006	
> FetchOperator$FetchInputFormatSplit.getRecordReader(JobConf) line: 776	
> {code}
> If the table has 1000 columns and spans N splits, we end up creating 1000*N TreeReader objects when we might need only N (one per split).
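
For reference, a minimal standalone sketch of the ORC-side API involved (illustrative only, not the HIVE-21470 fix; the file path, column ids and predicate column are made up). The TreeReader tree is built when Reader.rows(options) is called, so the include mask and the SearchArgument on Reader.Options are what would have to be wired through before that call for the work to stay proportional to the columns actually needed:

{code}
// Illustrative sketch only: a standalone ORC read with a projection mask and a
// SearchArgument. The file path, column ids and the predicate column ("col2")
// are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;

public class OrcSargReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Reader reader = OrcFile.createReader(
        new Path("/tmp/tbl1/bucket_00000"), OrcFile.readerOptions(conf));

    // Project only the root struct (id 0) and col1 (assumed to have column id 1);
    // columns left false should not need real column readers.
    boolean[] include = new boolean[reader.getSchema().getMaximumId() + 1];
    include[0] = true;
    include[1] = true;

    // Push the predicate down as a SearchArgument.
    SearchArgument sarg = SearchArgumentFactory.newBuilder()
        .startAnd()
        .equals("col2", PredicateLeaf.Type.LONG, 2019L)
        .end()
        .build();

    Reader.Options options = new Reader.Options(conf)
        .include(include)
        .searchArgument(sarg, new String[]{"col2"}); // sarg column names, simplified

    // This is the point where the TreeReader tree gets instantiated
    // (RecordReaderImpl -> TreeReaderFactory.createTreeReader in the stack above).
    RecordReader rows = reader.rows(options);
    rows.close();
  }
}
{code}

In the ACID stack above, that rows()/rowsOptions() call happens inside the merger's reader-pair constructor, which is why the full per-column cost is paid again for every split.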



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)