Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/05/29 01:45:28 UTC
[GitHub] [spark] Ngone51 edited a comment on issue #24699:
[SPARK-27666][CORE] Do not release lock while TaskContext already completed
URL: https://github.com/apache/spark/pull/24699#issuecomment-496751895
> Skip release lock if TaskContext has completed shall also resolve the issue
Do you mean something like this, @jiangxb1987?
```
val ci = CompletionIterator[Any, Iterator[Any]](iter, {
  if (!taskContext.isCompleted()) {
    releaseLock(blockId, taskAttemptId)
  }
})
```
I had considered that, but for:
https://github.com/apache/spark/blob/e9f3f62b2c0f521f3cc23fef381fc6754853ad4f/core/src/main/scala/org/apache/spark/storage/BlockManager.scala#L764-L766
it seems we can't wrap an if condition around `releaseLockAndDispose` in the same way, since we have to dispose the data anyway, right? So we would need to pass the TaskContext into `releaseLockAndDispose`. In `releaseLockAndDispose`:
https://github.com/apache/spark/blob/e9f3f62b2c0f521f3cc23fef381fc6754853ad4f/core/src/main/scala/org/apache/spark/storage/BlockManager.scala#L1666-L1672
we could also wrap an if condition around `releaseLock` there. But I think it is better to avoid duplicating the check, so I ultimately moved the logic into `releaseLock` itself.
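To illustrate, here is a minimal self-contained sketch of that final approach. `TaskStub` and `LockManager` are hypothetical stand-ins for Spark's `TaskContext` and the lock bookkeeping in `BlockManager`; only the guard placement mirrors what this PR proposes, not the real signatures.

```scala
// Sketch: move the "is the task already completed?" check into releaseLock
// itself, so every caller (including releaseLockAndDispose) gets it for free.
object ReleaseLockSketch {
  // Hypothetical stand-in for TaskContext.
  final case class TaskStub(var completed: Boolean) {
    def isCompleted(): Boolean = completed
  }

  // Hypothetical stand-in for the BlockManager's lock bookkeeping.
  class LockManager {
    var releaseCount = 0

    // The guard lives inside releaseLock, avoiding duplicated
    // if-conditions at every call site.
    def releaseLock(blockId: String, taskContext: Option[TaskStub]): Unit = {
      if (taskContext.forall(!_.isCompleted())) {
        releaseCount += 1
      }
    }

    // Disposal must happen regardless of task state; only the lock
    // release is skipped once the task has completed.
    def releaseLockAndDispose(blockId: String, taskContext: Option[TaskStub]): Boolean = {
      releaseLock(blockId, taskContext)
      true // pretend the underlying data buffer was disposed
    }
  }

  def main(args: Array[String]): Unit = {
    val mgr = new LockManager
    val task = TaskStub(completed = false)
    mgr.releaseLockAndDispose("rdd_0_0", Some(task)) // task running: lock released
    task.completed = true
    val disposed = mgr.releaseLockAndDispose("rdd_0_0", Some(task)) // release skipped
    println(s"releases=${mgr.releaseCount} disposed=$disposed")
  }
}
```

With this shape, a wrapper around `CompletionIterator` needs no extra condition at all, and `releaseLockAndDispose` still disposes the data unconditionally.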
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org