Posted to common-issues@hadoop.apache.org by "virajjasani (via GitHub)" <gi...@apache.org> on 2023/06/01 01:48:39 UTC

[GitHub] [hadoop] virajjasani commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1212472590


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -268,12 +310,15 @@ public void close() throws IOException {
     int numFilesDeleted = 0;
 
     for (Entry entry : blocks.values()) {
+      entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   > also, L303: should closed be atomic?
   
   +1 to this suggestion; let me create a separate patch under HADOOP-18756 to track it better.
   
   > good: no race condition in close
   > bad) the usual
   
   Sounds reasonable, let me try setting a timeout.
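
   For context, a minimal sketch of the pattern under discussion, with hypothetical names (`BlockCacheSketch`, `tryWriteLock`, the 5-second timeout) that are illustrative only and not the PR's actual SingleFilePerBlockCache code: each cache entry is guarded by a ReentrantReadWriteLock, the closed flag is held in an AtomicBoolean, and close() acquires each entry's write lock via tryLock with a timeout rather than blocking indefinitely.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.AtomicBoolean;
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   /** Hypothetical sketch; the real SingleFilePerBlockCache differs in detail. */
   class BlockCacheSketch {

     /** One cached block; readers share the read lock, close/eviction take the write lock. */
     static final class Entry {
       private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

       /** Mirrors the takeLock(Entry.LockType.WRITE) call added in the diff: blocks unconditionally. */
       void takeWriteLock() {
         lock.writeLock().lock();
       }

       /** Timeout variant discussed in the review: try to acquire the write lock, give up after the wait. */
       boolean tryWriteLock(long timeout, TimeUnit unit) throws InterruptedException {
         return lock.writeLock().tryLock(timeout, unit);
       }

       void releaseWriteLock() {
         lock.writeLock().unlock();
       }
     }

     private final Map<Integer, Entry> blocks = new ConcurrentHashMap<>();

     /** Atomic so concurrent readers observe the closed state without extra locking. */
     private final AtomicBoolean closed = new AtomicBoolean(false);

     /** Close once; take each entry's write lock (with a timeout) before deleting its cache file. */
     public void close() throws InterruptedException {
       if (!closed.compareAndSet(false, true)) {
         return; // already closed
       }
       for (Entry entry : blocks.values()) {
         if (entry.tryWriteLock(5, TimeUnit.SECONDS)) {
           try {
             // delete the cache file backing this block
           } finally {
             entry.releaseWriteLock();
           }
         }
         // if the lock could not be acquired in time, skip and log rather than hang close()
       }
       blocks.clear();
     }
   }
   ```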



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

