Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/09/27 18:49:18 UTC

[GitHub] [iceberg] flyrain commented on pull request #5742: Spark: Test custom metric for number of deletes applied, in code path that use streaming delete filter

flyrain commented on PR #5742:
URL: https://github.com/apache/iceberg/pull/5742#issuecomment-1259916069

   Hi @wypoon, sorry I may not be clear in my last comment. Let me explain a bit more.
   1. No matter whether the pos deletes are streamed or not, most of the logic is already tested by the cases in `TestSparkReaderDeletes`, including the mix of pos deletes and eq deletes.
   2. The only thing we forgot to test is whether, in the case of streaming pos deletes, the count logic is correct, mainly what happens in the class `PositionStreamDeleteFilter`. `TestPositionFilter` would be the right place for a unit test.
   3. BTW, I'm OK with the system property approach if there is no other way, but it's a bit hacky, and in this case I don't think it is necessary.
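   [Editor's note: as a rough illustration of the unit test suggested in point 2, the sketch below is NOT the actual Iceberg API. `StreamingPositionDeleteFilter` here is a made-up stand-in for `PositionStreamDeleteFilter`: it streams over row positions once, drops positions that appear in the delete set, and counts only the deletes that actually matched a row. A unit test would assert on that counter.]

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.TreeSet;

// Simplified stand-in for a streaming position-delete filter: walks the row
// iterator once, skips positions present in the delete set, and increments a
// counter for each delete that was actually applied to a row.
class StreamingPositionDeleteFilter implements Iterator<Long> {
  private final Iterator<Long> rows;           // row positions, in order
  private final TreeSet<Long> deletePositions; // sorted pos deletes
  private long deletesApplied = 0;
  private Long next;

  StreamingPositionDeleteFilter(Iterator<Long> rows, TreeSet<Long> deletePositions) {
    this.rows = rows;
    this.deletePositions = deletePositions;
    advance();
  }

  private void advance() {
    next = null;
    while (rows.hasNext()) {
      long pos = rows.next();
      if (deletePositions.contains(pos)) {
        deletesApplied++; // count only deletes that matched a row
      } else {
        next = pos;
        return;
      }
    }
  }

  public boolean hasNext() { return next != null; }

  public Long next() {
    if (next == null) throw new NoSuchElementException();
    long result = next;
    advance();
    return result;
  }

  long deletesApplied() { return deletesApplied; }
}

public class Main {
  public static void main(String[] args) {
    // Delete position 99 matches no row, so it must not be counted.
    TreeSet<Long> deletes = new TreeSet<>(List.of(1L, 3L, 99L));
    StreamingPositionDeleteFilter filter =
        new StreamingPositionDeleteFilter(List.of(0L, 1L, 2L, 3L, 4L).iterator(), deletes);

    long survivors = 0;
    while (filter.hasNext()) { filter.next(); survivors++; }

    // The unit test asserts on the counter, not just on the surviving rows.
    assert survivors == 3;
    assert filter.deletesApplied() == 2; // 99L never matched a row
    System.out.println("deletesApplied=" + filter.deletesApplied());
  }
}
```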
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: issues-help@iceberg.apache.org