Posted to issues@hive.apache.org by "Eugene Koifman (JIRA)" <ji...@apache.org> on 2017/10/16 21:41:00 UTC

[jira] [Updated] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files

     [ https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-17458:
----------------------------------
    Priority: Critical  (was: Major)

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---------------------------------------------------------------
>
>                 Key: HIVE-17458
>                 URL: https://issues.apache.org/jira/browse/HIVE-17458
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.2.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>            Priority: Critical
>
> VectorizedOrcAcidRowBatchReader is not used for 'original' files.  This will likely look like a performance regression when a table is converted from non-acid to acid, until it goes through a major compaction.
> Similarly, once Load Data is supported, any large files added via Load Data will not be read with vectorization until a major compaction runs.
> There is no reason this should be the case.  Just like OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other files in the logical tranche/bucket and compute the row-id offset for the split's RowBatch; a sketch follows the quoted description.  (Presumably getRecordReader().getRowNumber() works the same way in vectorized mode.)
> In that case we don't even need OrcSplit.isOriginal(): the reader can infer it from the file path, which in particular simplifies OrcInputFormat.determineSplitStrategies().
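
Below is a minimal sketch of the offset computation the description proposes, in Java against the org.apache.orc reader API. The class and helper names (OriginalFileSketch, rowIdOffset, isOriginal) are hypothetical, not Hive's actual code. The idea is the one OrcRawRecordMerger already applies: for a split over an original file, sum the row counts (read from ORC footers only) of every earlier original file in the same logical bucket, and infer "original" from the parent directory, since acid data lives under base_N/, delta_x_y/, and delete_delta_x_y/ directories.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.orc.OrcFile;
    import org.apache.orc.Reader;

    public class OriginalFileSketch {   // hypothetical name, not Hive code

      // First synthetic rowId of splitPath's file: the total row count of
      // every original file that precedes it in bucket order.
      static long rowIdOffset(List<FileStatus> bucketFiles,  // sorted in bucket order
                              Path splitPath,
                              Configuration conf) throws IOException {
        long offset = 0;
        for (FileStatus stat : bucketFiles) {
          if (stat.getPath().equals(splitPath)) {
            break;                                // all earlier files are counted
          }
          Reader reader = OrcFile.createReader(stat.getPath(),
                                               OrcFile.readerOptions(conf));
          offset += reader.getNumberOfRows();     // footer read only, no data scan
        }
        return offset;
      }

      // Original files sit directly in the table/partition directory; acid
      // files sit under base_N/, delta_x_y/ or delete_delta_x_y/ directories.
      static boolean isOriginal(Path file) {
        String parent = file.getParent().getName();
        return !parent.startsWith("base_")
            && !parent.startsWith("delta_")
            && !parent.startsWith("delete_delta_");
      }
    }

With that offset in hand, the reader can assign each row's synthetic ROW__ID as offset + getRowNumber(), which is the same bookkeeping OrcRawRecordMerger does on the non-vectorized path.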


