Posted to common-issues@hadoop.apache.org by "Gabor Bota (JIRA)" <ji...@apache.org> on 2019/01/04 15:15:00 UTC

[jira] [Updated] (HADOOP-16027) [DOC] Effective use of FS instances during S3A integration tests

     [ https://issues.apache.org/jira/browse/HADOOP-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Bota updated HADOOP-16027:
--------------------------------
    Attachment: HADOOP-16027.001.patch

> [DOC] Effective use of FS instances during S3A integration tests
> ----------------------------------------------------------------
>
>                 Key: HADOOP-16027
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16027
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Gabor Bota
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HADOOP-16027.001.patch
>
>
> While fixing HADOOP-15819 we found that a closed FS instance got into the static FS cache during testing, which caused other tests to fail when the tests ran sequentially.
> We should document the following best practices in the testing section of the S3A docs:
> {panel}
> Tests using FileSystems run fastest when they can recycle the existing FS instance cached in the same JVM. If you do that, you MUST NOT close those instances or apply unique configuration to them. If you need a guarantee of 100% isolation, or an instance with a unique configuration, create a new instance, which you MUST close in the test teardown to avoid leaking resources.
> Do not add FileSystem instances (e.g. via org.apache.hadoop.fs.FileSystem#addFileSystemForTesting) to the cache if they will be modified or closed during the test run. Doing so can cause other tests that pick up the same modified or closed FS instance to fail. For more details see HADOOP-15819.
> {panel}
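
A minimal JUnit sketch of the "unique config" pattern described in the panel above. This is an illustration, not code from the Hadoop test suite: the class name, bucket URI, and the choice of fs.s3a.paging.maximum as the per-test option are all made up for the example. The key point is that FileSystem.newInstance() bypasses the JVM-wide cache, so the instance can be safely closed in teardown.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.After;
import org.junit.Test;

// Hypothetical test class illustrating the documented pattern.
public class ITestS3AUniqueConfigExample {
  private FileSystem fs;

  @Test
  public void testWithUniqueConfig() throws Exception {
    Configuration conf = new Configuration();
    // Illustrative per-test option; any setting that must not leak into
    // the shared cached instance belongs here.
    conf.setInt("fs.s3a.paging.maximum", 2);
    // newInstance() always creates an uncached FS, unlike FileSystem.get(),
    // so closing it cannot break other tests sharing the cached instance.
    fs = FileSystem.newInstance(new URI("s3a://example-bucket"), conf);
    // ... assertions against fs ...
  }

  @After
  public void teardown() throws Exception {
    if (fs != null) {
      // MUST close uncached instances to avoid leaking connections/threads.
      fs.close();
    }
  }
}
```

By contrast, an FS obtained via FileSystem.get() comes from the static cache and must be left open and unmodified.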



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org