Posted to dev@hive.apache.org by "Adam Szita (JIRA)" <ji...@apache.org> on 2018/08/07 13:42:00 UTC

[jira] [Created] (HIVE-20330) HCatLoader cannot handle multiple InputJobInfo objects for a job with multiple inputs

Adam Szita created HIVE-20330:
---------------------------------

             Summary: HCatLoader cannot handle multiple InputJobInfo objects for a job with multiple inputs
                 Key: HIVE-20330
                 URL: https://issues.apache.org/jira/browse/HIVE-20330
             Project: Hive
          Issue Type: Bug
          Components: HCatalog
            Reporter: Adam Szita
            Assignee: Adam Szita


While running performance tests on Pig (0.12 and 0.17) we observed a severe performance drop in a workload that reads multiple inputs through HCatLoader.

The reason is that for a particular MR job with multiple Hive tables as input, Pig calls {{setLocation}} on each {{LoaderFunc (HCatLoader)}} instance but only one table's information (InputJobInfo instance) gets tracked in the JobConf. (This is under config key {{HCatConstants.HCAT_KEY_JOB_INFO}}).

Each such call overwrites the preexisting value, so only the last table's information is considered when Pig calls {{getStatistics}} to estimate the required reducer count.
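The clobbering behavior can be illustrated with a minimal sketch. This is not the actual HCatalog code: a plain {{HashMap}} stands in for {{JobConf}}, and the serialized {{InputJobInfo}} is represented as a string. Only the conf key name mirrors the real {{HCatConstants.HCAT_KEY_JOB_INFO}} constant; everything else is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class OverwriteSketch {
    // Single conf key under which HCatLoader stores input-table info
    static final String HCAT_KEY_JOB_INFO = "mapreduce.lib.hcat.job.info";

    // Hypothetical stand-in for HCatLoader.setLocation(): each call
    // writes one table's serialized InputJobInfo under the SAME key,
    // clobbering whatever a previous input's call stored there.
    static void setLocation(Map<String, String> conf, String serializedInputJobInfo) {
        conf.put(HCAT_KEY_JOB_INFO, serializedInputJobInfo);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        setLocation(conf, "InputJobInfo{table=big_table, size=256GB}");
        setLocation(conf, "InputJobInfo{table=small_table, size=1MB}");

        // Only the last-registered table's info survives in the conf:
        System.out.println(conf.get(HCAT_KEY_JOB_INFO));
        // prints "InputJobInfo{table=small_table, size=1MB}"
    }
}
```

A fix would have to accumulate one {{InputJobInfo}} per input (e.g. a list keyed per load location) instead of a single value.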

For example, with two input tables of 256GB and 1MB respectively, Pig queries HCat for the size of both, but ends up seeing either 1MB+1MB=2MB or 256GB+256GB=0.5TB, depending on the input order in the execution plan's DAG.
It should of course see ~256.001GB in total and, by default, use 257 reducers accordingly.
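The arithmetic behind the 257-reducer figure can be sketched as follows. The ~1GB-per-reducer divisor is an assumption matching the numbers above (Pig's actual default for {{pig.exec.reducers.bytes.per.reducer}} may differ by version); the table sizes are the ones from the example.

```java
public class ReducerEstimate {
    public static void main(String[] args) {
        long gib = 1024L * 1024 * 1024;
        long bigTable = 256L * gib;          // 256 GB input
        long smallTable = 1024L * 1024;      // 1 MB input
        long bytesPerReducer = gib;          // assumed ~1 GB per reducer

        // Correct estimate: both inputs summed -> ceil(256.001) = 257
        long correct = (long) Math.ceil(
                (bigTable + smallTable) / (double) bytesPerReducer);

        // Buggy estimate when only the 1 MB table's info survives and is
        // counted once per input -> ceil(2 MB / 1 GB) = 1
        long buggy = (long) Math.ceil(
                (smallTable + smallTable) / (double) bytesPerReducer);

        System.out.println("correct=" + correct + " buggy=" + buggy);
        // prints "correct=257 buggy=1"
    }
}
```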

In the unlucky ordering the estimate comes out as 2MB, and a single reducer is left struggling with 256GB of data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)