Posted to reviews@spark.apache.org by vanzin <gi...@git.apache.org> on 2017/12/01 00:05:50 UTC
[GitHub] spark pull request #19848: [SPARK-22162] Executors and the driver should use...
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19848#discussion_r154236366
--- Diff: core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -102,14 +103,15 @@ object SparkHadoopWriter extends Logging {
       context: TaskContext,
       config: HadoopWriteConfigUtil[K, V],
       jobTrackerId: String,
+      commitJobId: Int,
       sparkStageId: Int,
       sparkPartitionId: Int,
       sparkAttemptNumber: Int,
       committer: FileCommitProtocol,
       iterator: Iterator[(K, V)]): TaskCommitMessage = {
     // Set up a task.
     val taskContext = config.createTaskAttemptContext(
-      jobTrackerId, sparkStageId, sparkPartitionId, sparkAttemptNumber)
+      jobTrackerId, commitJobId, sparkPartitionId, sparkAttemptNumber)
--- End diff --
`sparkStageId` is now unused in this method.
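Since `sparkStageId` no longer feeds into `createTaskAttemptContext`, the natural follow-up is to drop the parameter from the method signature altogether. A minimal, self-contained sketch of that cleanup (the stub types below stand in for Spark's `HadoopWriteConfigUtil` and `TaskCommitMessage` and are assumptions, not Spark's actual API):

```scala
// Stub standing in for Spark's HadoopWriteConfigUtil: it just records
// which identifiers were actually used to build the task attempt context.
class StubWriteConfig {
  def createTaskAttemptContext(
      jobTrackerId: String,
      commitJobId: Int,
      sparkPartitionId: Int,
      sparkAttemptNumber: Int): String =
    s"$jobTrackerId-$commitJobId-$sparkPartitionId-$sparkAttemptNumber"
}

// After the change in the diff, the task-side helper only needs
// commitJobId; sparkStageId has no remaining use and can be removed
// from the parameter list entirely.
def executeTask(
    config: StubWriteConfig,
    jobTrackerId: String,
    commitJobId: Int,
    sparkPartitionId: Int,
    sparkAttemptNumber: Int): String =
  config.createTaskAttemptContext(
    jobTrackerId, commitJobId, sparkPartitionId, sparkAttemptNumber)

val msg = executeTask(new StubWriteConfig, "20171201", 7, 0, 1)
// msg == "20171201-7-0-1"
```

Removing the dead parameter also forces callers to stop computing a value that is never consumed, which is usually the point of this kind of review comment.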
---