Posted to issues@carbondata.apache.org by ravipesala <gi...@git.apache.org> on 2017/12/04 12:45:52 UTC

[GitHub] carbondata pull request #1521: [WIP] [CARBONDATA-1743] fix concurrent pre-agg...

Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1521#discussion_r154637843
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala ---
    @@ -197,8 +197,17 @@ case class CarbonPreAggregateQueryRules(sparkSession: SparkSession) extends Rule
                     .asInstanceOf[LogicalRelation]
                   (selectedDataMapSchema, carbonRelation)
                 }.minBy(f => f._2.relation.asInstanceOf[CarbonDatasourceHadoopRelation].sizeInBytes)
    -          // transform the query plan based on selected child schema
    -          transformPreAggQueryPlan(plan, aggDataMapSchema, carbonRelation)
    +          if (carbonRelation.relation.asInstanceOf[CarbonDatasourceHadoopRelation].sizeInBytes ==
    --- End diff --
    
    While calculating `sizeInBytes` in `CarbonRelation`, we can first determine the set of valid segments and then compute the store size of the table from only those segments.
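    The suggestion above can be sketched roughly as follows. This is a minimal illustration, not the actual CarbonData implementation: `Segment` and its fields are hypothetical stand-ins for the real segment metadata, and the `"SUCCESS"` status check stands in for whatever validity check the table status file provides.

    ```scala
    // Hypothetical model of segment metadata; the real CarbonData types differ.
    case class Segment(id: String, status: String, storeSize: Long)

    // Suggested order of operations: first narrow to the valid segments,
    // then sum the store size over only those segments, rather than
    // computing the size of the whole table store up front.
    def sizeInBytes(segments: Seq[Segment]): Long =
      segments
        .filter(_.status == "SUCCESS") // step 1: keep only valid segments
        .map(_.storeSize)              // step 2: store size per valid segment
        .sum                           // step 3: total table size
    ```

    With this ordering, segments that are marked for delete or still in-flight never contribute to `sizeInBytes`, which keeps the pre-aggregate table selection (the `minBy` on `sizeInBytes` in the diff) based on live data only.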


---