Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/11/09 17:22:00 UTC

[jira] [Work logged] (HIVE-24313) Optimise stats collection for file sizes on cloud storage

     [ https://issues.apache.org/jira/browse/HIVE-24313?focusedWorklogId=509264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509264 ]

ASF GitHub Bot logged work on HIVE-24313:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Nov/20 17:21
            Start Date: 09/Nov/20 17:21
    Worklog Time Spent: 10m 
      Work Description: vineetgarg02 commented on pull request #1636:
URL: https://github.com/apache/hive/pull/1636#issuecomment-724154844


   @rbalamohan can you take a look please?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 509264)
    Remaining Estimate: 0h
            Time Spent: 10m

> Optimise stats collection for file sizes on cloud storage
> ---------------------------------------------------------
>
>                 Key: HIVE-24313
>                 URL: https://issues.apache.org/jira/browse/HIVE-24313
>             Project: Hive
>          Issue Type: Improvement
>          Components: HiveServer2
>            Reporter: Rajesh Balamohan
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> When stats information is not present (e.g. for external tables), RelOptHiveTable computes basic stats at runtime.
> The code path is the following:
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java#L598]
> {code:java}
> Statistics stats = StatsUtils.collectStatistics(hiveConf, partitionList,
>                 hiveTblMetadata, hiveNonPartitionCols, nonPartColNamesThatRqrStats, colStatsCached,
>                 nonPartColNamesThatRqrStats, true);
>  {code}
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L322]
> {code:java}
> for (Partition p : partList.getNotDeniedPartns()) {
>         BasicStats basicStats = basicStatsFactory.build(Partish.buildFor(table, p));
>         partStats.add(basicStats);
>       }
>  {code}
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStats.java#L205]
>  
> {code:java}
> try {
>             ds = getFileSizeForPath(path);
>           } catch (IOException e) {
>             ds = 0L;
>           }
>  {code}
>  
> For a table and query with a large number of partitions, this per-partition file-size lookup takes a long time and increases compilation time. It would be good to fix it with a ForkJoinPool, e.g. partList.getNotDeniedPartns().parallelStream().forEach(p -> ...) (a rough sketch follows below the quoted description).
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)