Posted to reviews@spark.apache.org by mgaido91 <gi...@git.apache.org> on 2018/08/17 08:39:25 UTC

[GitHub] spark pull request #21950: [SPARK-24914][SQL][WIP] Add configuration to avoi...

Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21950#discussion_r210841402
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala ---
    @@ -76,4 +78,16 @@ private[sql] object PruneFileSourcePartitions extends Rule[LogicalPlan] {
             op
           }
       }
    +
    +  private def calcPartSize(catalogTable: Option[CatalogTable], sizeInBytes: Long): Long = {
    +    val conf: SQLConf = SQLConf.get
    +    val factor = conf.sizeDeserializationFactor
    +    if (catalogTable.isDefined && factor != 1.0 &&
    +      // TODO: The serde check should be in a utility function, since it is also checked elsewhere
    +      catalogTable.get.storage.serde.exists(s => s.contains("Parquet") || s.contains("Orc"))) {
    --- End diff --
    
    I am not sure about this. Are we saying that only Parquet/ORC files can be compressed? Other formats can be compressed too (e.g. text or Avro files written with a compression codec).
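    
    A rough sketch of what I mean, related to the TODO above about moving the serde check into a utility (names like `SerdeUtils` and `isCompressedSerde` are placeholders for illustration, not existing Spark APIs):
    
        import org.apache.spark.sql.catalyst.catalog.CatalogTable
    
        object SerdeUtils {
          // Serde class names that hint at a compressed on-disk layout.
          // Hard-coding only Parquet/ORC here is exactly what I am questioning:
          // other formats can also be much smaller on disk than in memory.
          private val compressedSerdeHints = Seq("Parquet", "Orc")
    
          def isCompressedSerde(catalogTable: Option[CatalogTable]): Boolean = {
            catalogTable.exists(_.storage.serde.exists { serde =>
              compressedSerdeHints.exists(hint => serde.contains(hint))
            })
          }
        }
    
    With the check in one place, extending the list of formats (or making it configurable) would not require touching this rule.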


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org