Posted to commits@spark.apache.org by do...@apache.org on 2024/01/19 18:22:26 UTC

(spark) branch master updated: [SPARK-46774][SQL][AVRO] Use mapreduce.output.fileoutputformat.compress instead of deprecated mapred.output.compress in Avro write jobs

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 3d395a6b874b [SPARK-46774][SQL][AVRO] Use mapreduce.output.fileoutputformat.compress instead of deprecated mapred.output.compress in Avro write jobs
3d395a6b874b is described below

commit 3d395a6b874bfd4609323551773c61d42fb60b8a
Author: Kent Yao <ya...@apache.org>
AuthorDate: Fri Jan 19 10:22:13 2024 -0800

    [SPARK-46774][SQL][AVRO] Use mapreduce.output.fileoutputformat.compress instead of deprecated mapred.output.compress in Avro write jobs
    
    ### What changes were proposed in this pull request?
    
    According to the Hadoop [DeprecatedProperties](https://hadoop.apache.org/docs/r3.3.6/hadoop-project-dist/hadoop-common/DeprecatedProperties.html) list, `mapred.output.compress` is deprecated, so this PR switches the Avro write path to `mapreduce.output.fileoutputformat.compress`.
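
    For illustration only, a minimal sketch (not part of the patch) of setting the non-deprecated key on a Hadoop `Configuration`:

    ```scala
    import org.apache.hadoop.conf.Configuration

    // Hypothetical standalone example: enable output compression using the
    // current property name rather than the deprecated mapred.output.compress.
    val conf = new Configuration()
    conf.setBoolean("mapreduce.output.fileoutputformat.compress", true)
    ```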
    
    ### Why are the changes needed?
    
    Removes usage of a deprecated Hadoop configuration key.
    
    ### Does this PR introduce _any_ user-facing change?
    
    no
    
    ### How was this patch tested?
    
    I tested locally by verifying the compressed output files before and after this change.
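
    For reference, one way such a local check could be reproduced (assuming a Spark build with the Avro data source on the classpath; the path and codec below are illustrative, not the author's exact steps):

    ```scala
    // In spark-shell: write a small dataset as Avro with a codec, then inspect
    // the files on disk (e.g. `ls -l` sizes, or `avro-tools getmeta` to read
    // the avro.codec entry in the file header).
    spark.conf.set("spark.sql.avro.compression.codec", "snappy")
    spark.range(100000).write.format("avro").save("/tmp/avro-compress-check")
    ```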
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    no
    
    Closes #44799 from yaooqinn/SPARK-46774.
    
    Authored-by: Kent Yao <ya...@apache.org>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala     | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala b/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala
index f0b70f09aa55..05562c913b19 100644
--- a/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala
+++ b/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroUtils.scala
@@ -107,9 +107,9 @@ private[sql] object AvroUtils extends Logging {
         val jobConf = job.getConfiguration
         AvroCompressionCodec.fromString(codecName) match {
           case UNCOMPRESSED =>
-            jobConf.setBoolean("mapred.output.compress", false)
+            jobConf.setBoolean("mapreduce.output.fileoutputformat.compress", false)
           case compressed =>
-            jobConf.setBoolean("mapred.output.compress", true)
+            jobConf.setBoolean("mapreduce.output.fileoutputformat.compress", true)
             jobConf.set(AvroJob.CONF_OUTPUT_CODEC, compressed.getCodecName)
             if (compressed.getSupportCompressionLevel) {
               val level = sqlConf.getConfString(s"spark.sql.avro.$codecName.level",
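
For readers skimming the hunk above, a distilled, hypothetical standalone version of the same pattern (simplified names and wrapping; not the actual AvroUtils code):

    import org.apache.avro.mapreduce.AvroJob
    import org.apache.hadoop.conf.Configuration

    // Toggle output compression with the non-deprecated key and, when a codec
    // is selected, record it under Avro's output codec key.
    def configureAvroCompression(jobConf: Configuration, codecName: String): Unit = {
      if (codecName.equalsIgnoreCase("uncompressed")) {
        jobConf.setBoolean("mapreduce.output.fileoutputformat.compress", false)
      } else {
        jobConf.setBoolean("mapreduce.output.fileoutputformat.compress", true)
        jobConf.set(AvroJob.CONF_OUTPUT_CODEC, codecName)
      }
    }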


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org