Posted to mapreduce-issues@hadoop.apache.org by "Karthik Palaniappan (JIRA)" <ji...@apache.org> on 2017/12/28 00:03:00 UTC

[jira] [Created] (MAPREDUCE-7029) FileOutputCommitter#commitTask should delete task directory

Karthik Palaniappan created MAPREDUCE-7029:
----------------------------------------------

             Summary: FileOutputCommitter#commitTask should delete task directory
                 Key: MAPREDUCE-7029
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7029
             Project: Hadoop Map/Reduce
          Issue Type: Improvement
    Affects Versions: 2.8.2
         Environment: - Google Cloud Storage (with the GCS connector: https://github.com/GoogleCloudPlatform/bigdata-interop/tree/master/gcs) for HCFS compatibility.

- FileOutputCommitter algorithm v2.

- Running on Google Compute Engine with Java 8, Debian 8, Hadoop 2.8.2, Spark 2.2.0.
            Reporter: Karthik Palaniappan
            Priority: Minor


I ran a Spark job that outputs thousands of Parquet files (i.e., it has thousands of reducers), and it hung for several minutes in the driver after all tasks had completed. Here is a very simple repro of the job (to be run in a spark-shell):

{code:scala}
spark.range(1L << 20).repartition(1 << 14).write.save("gs://some/path")
{code}

Spark actually calls into MapReduce's FileOutputCommitter. Job commit (specifically cleanupJob()) recursively deletes the job temporary directory, which is something like "gs://some/path/_temporary". If I understand correctly, on HDFS this is O(1), but on GCS (and every HCFS I know of) it requires a full file tree walk. Deleting tens of thousands of objects in GCS takes several minutes.
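For reference, that cleanup path boils down to a single recursive delete of the job temporary directory. Below is a minimal Java sketch of the behavior for illustration, not the actual FileOutputCommitter source; the class and method names are hypothetical:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the job-cleanup behavior described above: one recursive
// delete of the job temporary directory. On HDFS this is a single cheap
// metadata operation; on GCS and similar HCFS implementations it must
// enumerate and delete every object under the prefix, which is where
// the multi-minute hang comes from.
public class JobCleanupSketch {
  static void cleanupJob(Path outputPath, Configuration conf) throws IOException {
    // "_temporary" is the pending-directory name FileOutputCommitter uses
    Path pendingJobAttemptsPath = new Path(outputPath, "_temporary");
    FileSystem fs = pendingJobAttemptsPath.getFileSystem(conf);
    fs.delete(pendingJobAttemptsPath, true); // recursive delete
  }
}
{code}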

I propose that commitTask() recursively delete its task attempt temp directory (something like "gs://some/path/_temporary/attempt1/task1"). On HDFS, this is O(1) per task, so it adds very little overhead. On GCS (and other HCFSs), it parallelizes deletion of the job temp directory across tasks.
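To make the proposal concrete, here is a hedged sketch of the v2 commitTask() flow with the per-task delete added. This is an illustration, not the attached patch: CommitTaskSketch and mergeTaskOutput() are hypothetical names, with mergeTaskOutput() standing in for FileOutputCommitter's private mergePaths() step.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitTaskSketch {
  static void commitTask(Path outputPath, Path taskAttemptPath, Configuration conf)
      throws IOException {
    FileSystem fs = taskAttemptPath.getFileSystem(conf);
    // Existing v2 behavior: merge this task's output into the final
    // output directory.
    mergeTaskOutput(fs, taskAttemptPath, outputPath);
    // Proposed addition: each task deletes its own attempt directory,
    // so the deletes run in parallel across tasks rather than serially
    // inside cleanupJob().
    fs.delete(taskAttemptPath, true);
  }

  // Hypothetical stand-in for the committer's private mergePaths():
  // moves each child of the attempt directory into the output directory,
  // leaving the (now mostly empty) attempt directory behind.
  private static void mergeTaskOutput(FileSystem fs, Path from, Path to)
      throws IOException {
    for (FileStatus child : fs.listStatus(from)) {
      Path dest = new Path(to, child.getPath().getName());
      if (!fs.rename(child.getPath(), dest)) {
        throw new IOException("failed to move " + child.getPath() + " to " + dest);
      }
    }
  }
}
{code}

Since each task already owns its attempt directory, the delete is safe to do in commitTask() and only moves work that cleanupJob() would have done anyway.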

With the attached patch, the repro above went from taking ~10 minutes to ~5 minutes, and per-task time did not change significantly.

Side note: I found this issue with Spark, but I assume it applies equally to a MapReduce job with thousands of reducers.


