Posted to issues@hive.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2017/07/26 00:04:10 UTC

[jira] [Closed] (HIVE-14920) S3: Optimize SimpleFetchOptimizer::checkThreshold()

     [ https://issues.apache.org/jira/browse/HIVE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley closed HIVE-14920.
--------------------------------

> S3: Optimize SimpleFetchOptimizer::checkThreshold()
> ---------------------------------------------------
>
>                 Key: HIVE-14920
>                 URL: https://issues.apache.org/jira/browse/HIVE-14920
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 2.2.0
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>            Priority: Minor
>             Fix For: 2.2.0
>
>         Attachments: HIVE-14920.1.patch, HIVE-14920.2.patch
>
>
> Query: A simple query like the following takes much longer than expected in the query compilation phase (~330 seconds for a 200 GB TPC-DS dataset):
> {noformat}
> select ws_item_sk from web_sales where ws_item_sk > 10 limit 10;
> {noformat}
> Such a query triggers {{SimpleFetchOptimizer}}, which internally tries to determine whether the size of the input data is within the threshold defined by {{hive.fetch.task.conversion.threshold}} (~1 GB).
> This check turns out to be extremely expensive when the dataset is partitioned; an example stack trace is given below, followed by a sketch of one possible mitigation. Note that the check happens on the client side and fetches the content length of 1800+ partitions before proceeding to the next rule.
> {noformat}
>         at org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1486)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.getFileLength(SimpleFetchOptimizer.java:466)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.calculateLength(SimpleFetchOptimizer.java:451)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.getInputLength(SimpleFetchOptimizer.java:423)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.access$300(SimpleFetchOptimizer.java:323)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.checkThreshold(SimpleFetchOptimizer.java:168)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.optimize(SimpleFetchOptimizer.java:133)
>         at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.transform(SimpleFetchOptimizer.java:105)
>         at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:207)
>         at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10466)
>         at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:216)
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:230)
>         at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:230)
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:464)
>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:320)
>         at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1219)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1260)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1146)
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:217)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
>         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:740)
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {noformat}
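> One way to cut this cost (a minimal sketch, not the actual HIVE-14920 patch) is to stop summing partition sizes as soon as the running total crosses the threshold, so a large partitioned table does not require a {{getContentSummary()}} call for every partition. The {{isUnderThreshold}} helper below is hypothetical; only {{FileSystem.getContentSummary()}} and the {{hive.fetch.task.conversion.threshold}} setting come from the report above.
> {noformat}
> // Hypothetical sketch, not Hive's actual implementation or the HIVE-14920 patch.
> import java.io.IOException;
> import java.util.List;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class ThresholdCheckSketch {
>   /**
>    * Returns true if the combined length of the given partition paths stays
>    * under the fetch-conversion threshold. Bails out as soon as the running
>    * total exceeds the limit, so a table with 1800+ partitions no longer
>    * needs a getContentSummary() call for every single partition.
>    */
>   static boolean isUnderThreshold(FileSystem fs, List<Path> partitionPaths,
>                                   long threshold) throws IOException {
>     long total = 0;
>     for (Path p : partitionPaths) {
>       total += fs.getContentSummary(p).getLength();
>       if (total > threshold) {
>         return false; // early exit: remaining partitions need not be scanned
>       }
>     }
>     return true;
>   }
> }
> {noformat}
> On S3, each {{getContentSummary()}} call typically translates into one or more remote listing requests, so even with an early exit the per-partition summaries could be fetched in parallel (e.g. via an {{ExecutorService}}) for a further reduction; the attached patches should be treated as the authoritative fix.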



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)