Posted to reviews@spark.apache.org by leachbj <gi...@git.apache.org> on 2018/08/07 04:16:22 UTC

[GitHub] spark pull request #16898: [SPARK-19563][SQL] avoid unnecessary sort in File...

Github user leachbj commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16898#discussion_r208094538
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala ---
    @@ -119,23 +130,45 @@ object FileFormatWriter extends Logging {
           uuid = UUID.randomUUID().toString,
           serializableHadoopConf = new SerializableConfiguration(job.getConfiguration),
           outputWriterFactory = outputWriterFactory,
    -      allColumns = queryExecution.logical.output,
    -      partitionColumns = partitionColumns,
    +      allColumns = allColumns,
           dataColumns = dataColumns,
    -      bucketSpec = bucketSpec,
    +      partitionColumns = partitionColumns,
    +      bucketIdExpression = bucketIdExpression,
           path = outputSpec.outputPath,
           customPartitionLocations = outputSpec.customPartitionLocations,
           maxRecordsPerFile = options.get("maxRecordsPerFile").map(_.toLong)
             .getOrElse(sparkSession.sessionState.conf.maxRecordsPerFile)
         )
     
    +    // We should first sort by partition columns, then bucket id, and finally sorting columns.
    +    val requiredOrdering = partitionColumns ++ bucketIdExpression ++ sortColumns
    +    // the sort order doesn't matter
    +    val actualOrdering = queryExecution.executedPlan.outputOrdering.map(_.child)
    --- End diff --
    
    @cloud-fan would it be possible to use the logical plan rather than the executedPlan? If the optimizer decides the data is already sorted according to the logical plan, the executedPlan won't include the fields.
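    For reference, a minimal sketch of the check this diff introduces. It assumes Catalyst's Expression.semanticEquals, SortOrder, and SortExec APIs; the helper name writeWithRequiredOrdering and its parameter list are hypothetical, not the PR's actual code:

        import org.apache.spark.rdd.RDD
        import org.apache.spark.sql.catalyst.InternalRow
        import org.apache.spark.sql.catalyst.expressions.{Ascending, Expression, SortOrder}
        import org.apache.spark.sql.execution.{SortExec, SparkPlan}

        // Hypothetical helper mirroring the logic in the diff above: only add a
        // SortExec when the child plan's output ordering does not already start
        // with the required expressions.
        def writeWithRequiredOrdering(
            plan: SparkPlan,
            partitionColumns: Seq[Expression],
            bucketIdExpression: Option[Expression],
            sortColumns: Seq[Expression]): RDD[InternalRow] = {
          // Partition columns first, then bucket id, finally the sort columns.
          val requiredOrdering = partitionColumns ++ bucketIdExpression ++ sortColumns
          // Only the expressions matter; ascending vs. descending does not
          // change how rows group into files, so the direction is dropped.
          val actualOrdering = plan.outputOrdering.map(_.child)
          val orderingMatched =
            requiredOrdering.length <= actualOrdering.length &&
              requiredOrdering.zip(actualOrdering).forall {
                case (required, actual) => required.semanticEquals(actual)
              }
          if (orderingMatched) {
            plan.execute()
          } else {
            // Ascending is an arbitrary choice, per the "sort order doesn't
            // matter" comment in the diff.
            SortExec(requiredOrdering.map(SortOrder(_, Ascending)),
              global = false, child = plan).execute()
          }
        }

    The plan.outputOrdering call is exactly where the question above bites: which plan is inspected (analyzed, optimized, or executed) determines what ordering is reported.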


---
