Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/07/16 01:56:40 UTC

[GitHub] [iceberg] ajantha-bhat commented on pull request #5283: [WIP] Core: Fix drop table without purge for hadoop catalog

ajantha-bhat commented on PR #5283:
URL: https://github.com/apache/iceberg/pull/5283#issuecomment-1186054427

   The problem with the test cases is that, by default, Spark issues a plain "DROP TABLE" SQL statement, which does not purge the data.
   But because the warehouse is a temp dir, clean-up happens automatically at the end of each test case.
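
   As a rough sketch of the two SQL forms involved (the catalog and table
   names are made up, and "spark" is assumed to be an existing SparkSession
   with a Hadoop catalog named hadoop_cat configured), only the PURGE variant
   asks the catalog to delete the underlying files:

       // Plain DROP TABLE: Spark asks the Iceberg catalog to drop the table
       // without purging, so the data/metadata files stay under the warehouse
       // dir and are only removed when the temp dir is cleaned up after the test.
       spark.sql("DROP TABLE hadoop_cat.db.tbl");

       // DROP TABLE ... PURGE: Spark also asks the catalog to delete the files.
       spark.sql("DROP TABLE hadoop_cat.db.tbl PURGE");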
   
   Now that the Hadoop catalog supports purge = false, the test cases no longer clean up the data and hit a "table already exists" error.
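
   A minimal standalone sketch of the catalog-level call involved (not the
   PR's actual test code; the warehouse path and table names are illustrative,
   and it assumes the HadoopCatalog(Configuration, warehouseLocation)
   constructor):

       import org.apache.hadoop.conf.Configuration;
       import org.apache.iceberg.catalog.TableIdentifier;
       import org.apache.iceberg.hadoop.HadoopCatalog;

       public class DropWithoutPurgeSketch {
         public static void main(String[] args) {
           HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "file:///tmp/warehouse");
           TableIdentifier id = TableIdentifier.of("db", "tbl");

           // With this change, purge = false removes the table from the catalog
           // but leaves its data and metadata files on disk, so re-creating
           // "db.tbl" against the same warehouse can then fail with
           // "table already exists".
           catalog.dropTable(id, false /* purge */);
         }
       }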
   
   Also, even without the version-hint file, the Hadoop catalog derives the version by reading the metadata file names.
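
   For illustration only (this is not the actual HadoopTableOperations code,
   and the helper name is made up): Hadoop-table metadata files are named like
   "v3.metadata.json", so the latest version can be recovered by parsing the
   file names even when version-hint.text is missing.

       import java.util.regex.Matcher;
       import java.util.regex.Pattern;

       public class MetadataVersionSketch {
         private static final Pattern METADATA_FILE = Pattern.compile("^v(\\d+)\\.metadata\\.json$");

         // Returns the version encoded in a metadata file name, or -1 if the
         // name is not a versioned metadata file.
         static int versionFromFileName(String fileName) {
           Matcher m = METADATA_FILE.matcher(fileName);
           return m.matches() ? Integer.parseInt(m.group(1)) : -1;
         }

         public static void main(String[] args) {
           System.out.println(versionFromFileName("v3.metadata.json"));  // prints 3
         }
       }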
   
   Probably I need to modify the test cases to use "DROP TABLE PURGE" SQL, or stop deriving the version info when the version-hint file doesn't exist (but I am not sure about the impact).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
