Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/01/30 17:43:26 UTC

[GitHub] [iceberg] rdblue commented on a change in pull request #4008: [Site] - Document max_concurrent_deletes parameter in spark stored procedures.

rdblue commented on a change in pull request #4008:
URL: https://github.com/apache/iceberg/pull/4008#discussion_r795219794



##########
File path: site/docs/spark-procedures.md
##########
@@ -190,6 +190,7 @@ the `expire_snapshots` procedure will never remove files which are still require
 | `table`       | ✔️  | string | Name of the table to update |
 | `older_than`  | ️   | timestamp | Timestamp before which snapshots will be removed (Default: 5 days ago) |
 | `retain_last` |    | int       | Number of ancestor snapshots to preserve regardless of `older_than` (defaults to 1) |
+| `max_concurrent_deletes` |    | int       | Size of the thread pool used for delete file actions (defaults to null, which deletes files serially in the current thread without instantiating a dedicated thread pool) |

Review comment:
       The description is a bit long. How about "by default, no threadpool is used"
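
       For context, here is a minimal usage sketch of the procedure this row documents. The catalog name, table name, and argument values are illustrative placeholders, not taken from the PR; the sketch only assumes that `expire_snapshots` accepts `max_concurrent_deletes` as a named argument, which is what this change documents.

           -- Expire snapshots older than the given timestamp, keep the last 10,
           -- and delete the no-longer-referenced files using a pool of 4 threads.
           CALL my_catalog.system.expire_snapshots(
             table => 'db.sample',
             older_than => TIMESTAMP '2021-12-01 00:00:00.000',
             retain_last => 10,
             max_concurrent_deletes => 4
           );

       Leaving `max_concurrent_deletes` unset falls back to the default behavior described in the row above: files are deleted serially in the calling thread, with no thread pool created.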




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
