Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/06/04 16:07:43 UTC

[GitHub] [hadoop-ozone] smengcl opened a new pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

smengcl opened a new pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021


   ## What changes were proposed in this pull request?
   
Implement a new scheme, ofs://, for the Ozone Filesystem in which all volumes (and their buckets) can be accessed from a single root.
   
   This is also known as the Rooted Ozone Filesystem.
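   To illustrate the single-root layout, here is a minimal, hypothetical sketch of how a rooted `ofs://` path decomposes into authority (Ozone Manager host), volume, bucket, and key. It is not the PR's actual `OFSPath` class; the class and field names below are illustrative assumptions based only on the path layout described above.
   
   ```java
   import java.net.URI;
   
   /**
    * Illustrative sketch (not the real OFSPath): splits a rooted path of the
    * form ofs://<om-host>/<volume>/<bucket>/<key> into its components.
    */
   public class OfsPathSketch {
       final String authority; // Ozone Manager host, e.g. "om1"
       final String volume;
       final String bucket;
       final String key;       // remainder of the path, may contain '/'
   
       OfsPathSketch(String pathStr) {
           URI uri = URI.create(pathStr);
           this.authority = uri.getAuthority() == null ? "" : uri.getAuthority();
           // Drop leading '/' and split into at most three segments:
           // <volume>/<bucket>/<key...>
           String p = uri.getPath().replaceAll("^/+", "");
           String[] parts = p.split("/", 3);
           this.volume = parts.length > 0 ? parts[0] : "";
           this.bucket = parts.length > 1 ? parts[1] : "";
           this.key    = parts.length > 2 ? parts[2] : "";
       }
   
       public static void main(String[] args) {
           OfsPathSketch p = new OfsPathSketch("ofs://om1/vol1/bucket1/dir1/key1");
           System.out.println(p.authority + " " + p.volume + " "
               + p.bucket + " " + p.key);
       }
   }
   ```
   
   Under this layout the filesystem root `ofs://om1/` lists volumes, and `ofs://om1/vol1/` lists the buckets of `vol1`, so a single FileSystem instance can reach every volume and bucket.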
   
   This PR combines commits in feature branch [`HDDS-2665-ofs`](https://github.com/apache/hadoop-ozone/commits/HDDS-2665-ofs) for review and discussion.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2665
   
   ## How was this patch tested?
   
   Added FileSystem contract tests for ofs://.
   
   Added new integration tests.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] sonarcloud[bot] removed a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-638971553


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [6 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [4 Security Hotspots](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [67 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/0.png' alt='0.0%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [0.0% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='15.8%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [15.8% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/e25c7a6b6503224bb0b74fbcc142a953a5cc9480&el=desc) will **increase** coverage by `0.08%`.
   > The diff coverage is `71.82%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   + Coverage     70.48%   70.57%   +0.08%     
   - Complexity     9260     9405     +145     
   ============================================
     Files           961      965       +4     
     Lines         48177    48979     +802     
     Branches       4678     4790     +112     
   ============================================
   + Hits          33959    34565     +606     
   - Misses        11968    12104     +136     
   - Partials       2250     2310      +60     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `70.05% <0.00%> (-0.38%)` | `28.00 <0.00> (ø)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `75.24% <ø> (ø)` | `51.00 <0.00> (ø)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `41.66% <41.66%> (ø)` | `2.00 <2.00> (?)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `68.45% <68.45%> (ø)` | `47.00 <47.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `74.40% <74.40%> (ø)` | `50.00 <50.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `79.59% <79.59%> (ø)` | `37.00 <37.00> (?)` | |
   | [...hdds/scm/container/common/helpers/ExcludeList.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vY29udGFpbmVyL2NvbW1vbi9oZWxwZXJzL0V4Y2x1ZGVMaXN0LmphdmE=) | `83.67% <0.00%> (-14.29%)` | `20.00% <0.00%> (-4.00%)` | |
   | [...che/hadoop/hdds/scm/pipeline/PipelineStateMap.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL3BpcGVsaW5lL1BpcGVsaW5lU3RhdGVNYXAuamF2YQ==) | `83.04% <0.00%> (-4.10%)` | `45.00% <0.00%> (-3.00%)` | |
   | [...ent/algorithms/SCMContainerPlacementRackAware.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9wbGFjZW1lbnQvYWxnb3JpdGhtcy9TQ01Db250YWluZXJQbGFjZW1lbnRSYWNrQXdhcmUuamF2YQ==) | `76.69% <0.00%> (-3.01%)` | `31.00% <0.00%> (-2.00%)` | |
   | ... and [23 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [c5b0ba6...04dc11c](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **increase** coverage by `0.96%`.
   > The diff coverage is `71.82%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   + Coverage     69.45%   70.42%   +0.96%     
   - Complexity     9112     9376     +264     
   ============================================
     Files           961      965       +4     
     Lines         48148    48934     +786     
     Branches       4679     4788     +109     
   ============================================
   + Hits          33443    34460    +1017     
   + Misses        12486    12158     -328     
   - Partials       2219     2316      +97     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `70.05% <0.00%> (+70.05%)` | `28.00 <0.00> (+28.00)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `75.24% <ø> (+75.24%)` | `51.00 <0.00> (+51.00)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `41.66% <41.66%> (ø)` | `2.00 <2.00> (?)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `68.45% <68.45%> (ø)` | `47.00 <47.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `74.40% <74.40%> (ø)` | `50.00 <50.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `79.59% <79.59%> (ø)` | `37.00 <37.00> (?)` | |
   | [...p/ozone/om/ratis/utils/OzoneManagerRatisUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy91dGlscy9Pem9uZU1hbmFnZXJSYXRpc1V0aWxzLmphdmE=) | `67.44% <0.00%> (-19.13%)` | `39.00% <0.00%> (ø%)` | |
   | [...che/hadoop/ozone/om/ratis/OMRatisSnapshotInfo.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy9PTVJhdGlzU25hcHNob3RJbmZvLmphdmE=) | `83.33% <0.00%> (-10.67%)` | `7.00% <0.00%> (-5.00%)` | |
   | [...hdds/scm/container/common/helpers/ExcludeList.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vY29udGFpbmVyL2NvbW1vbi9oZWxwZXJzL0V4Y2x1ZGVMaXN0LmphdmE=) | `75.51% <0.00%> (-8.17%)` | `18.00% <0.00%> (-2.00%)` | |
   | ... and [42 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...a8b9efd](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] codecov-commenter commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **decrease** coverage by `1.04%`.
   > The diff coverage is `0.00%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   - Coverage     69.45%   68.41%   -1.05%     
   - Complexity     9112     9131      +19     
   ============================================
     Files           961      965       +4     
     Lines         48148    48950     +802     
     Branches       4679     4791     +112     
   ============================================
   + Hits          33443    33490      +47     
   - Misses        12486    13250     +764     
   + Partials       2219     2210       -9     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...hdds/scm/container/CloseContainerEventHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9DbG9zZUNvbnRhaW5lckV2ZW50SGFuZGxlci5qYXZh) | `72.41% <0.00%> (-17.25%)` | `6.00% <0.00%> (ø%)` | |
   | [...va/org/apache/hadoop/hdds/utils/db/RDBMetrics.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvZnJhbWV3b3JrL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy91dGlscy9kYi9SREJNZXRyaWNzLmphdmE=) | `85.71% <0.00%> (-7.15%)` | `13.00% <0.00%> (-1.00%)` | |
   | [...e/commandhandler/CloseContainerCommandHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3N0YXRlbWFjaGluZS9jb21tYW5kaGFuZGxlci9DbG9zZUNvbnRhaW5lckNvbW1hbmRIYW5kbGVyLmphdmE=) | `82.45% <0.00%> (-3.51%)` | `11.00% <0.00%> (ø%)` | |
   | ... and [22 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...2eb3181](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644930394


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [2 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [38 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/60.png' alt='76.1%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [76.1% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='16.9%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [16.9% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-639817700


   Will open a new JIRA to fix the SonarCloud bugs and to address the merge conflicts.




[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644420293


   SonarCloud Quality Gate failed.
   
   [3 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [41 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [8.4% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [14.1% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   Warning: the version of Java (1.8.0_232) used to run this analysis is deprecated and will no longer be accepted from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644918067


   SonarCloud Quality Gate failed.
   
   [5 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [43 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [10.7% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [14.1% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   Warning: the version of Java (1.8.0_232) used to run this analysis is deprecated and will no longer be accepted from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] sonarcloud[bot] removed a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644918067






[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-641415084


   > Thanks for driving this effort @smengcl. Overall it looks good to me. I agree that we are at the last step before the merge.
   > 
   > I have some questions about the code.
   > 
   > (And I feel guilty about the conflict; I can explain the changes I made on master, or I can help to rebase it.)
   
   Thanks for the comment @elek .
   
   The merge conflict comes from HDDS-3627 ([commit](https://github.com/apache/hadoop-ozone/commit/072370b947416d89fae11d00a84a1d9a6b31beaa)) as far as I can tell. It shouldn't be a big problem. It is always a delight to see good refactoring. :)
   
   One question, though: I notice `TestOzoneFileSystemWithMocks` was removed in HDDS-3627; in OFS I had forked it to create `TestRootedOzoneFileSystemWithMocks`. Should I relocate the latter somewhere else, or just remove it as well?
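   For context, the rooted scheme resolves paths of the form `/<volume>/<bucket>/<key>`. Below is a minimal, hypothetical sketch of that decomposition, mirroring what the tests do with `OFSPath` — the class name and fields here are illustrative, not Ozone's actual API:

```java
// Hypothetical sketch: decompose a rooted (ofs) path of the form
// /<volume>/<bucket>/<key> into its three components.
// Illustrative only -- not Ozone's actual OFSPath implementation.
public class OfsPathSketch {
  final String volume;
  final String bucket;
  final String key;

  OfsPathSketch(String path) {
    // Strip leading slashes, then split into at most three segments;
    // everything after the bucket is the key (and may itself contain '/').
    String[] parts = path.replaceAll("^/+", "").split("/", 3);
    this.volume = parts.length > 0 ? parts[0] : "";
    this.bucket = parts.length > 1 ? parts[1] : "";
    this.key = parts.length > 2 ? parts[2] : "";
  }

  public static void main(String[] args) {
    OfsPathSketch p = new OfsPathSketch("/volume1/bucket1/dir1/key1");
    System.out.println(p.volume + " " + p.bucket + " " + p.key);
  }
}
```

   With this decomposition, a check like rejecting a rename across buckets (as exercised by `testRenameToDifferentBucket`) reduces to comparing the volume and bucket components of the source and destination paths.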




[GitHub] [hadoop-ozone] elek commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
elek commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r436736394



##########
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##########
@@ -0,0 +1,876 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneKeyDetails;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.VolumeArgs;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.OzoneAclConfig;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_FOUND;
+
+/**
+ * Ozone file system tests that are not covered by contract tests.
+ * TODO: Refactor this and TestOzoneFileSystem later to reduce code duplication.
+ */
+public class TestRootedOzoneFileSystem {
+
+  @Rule
+  public Timeout globalTimeout = new Timeout(300_000);
+
+  private OzoneConfiguration conf;
+  private MiniOzoneCluster cluster = null;
+  private FileSystem fs;
+  private RootedOzoneFileSystem ofs;
+  private ObjectStore objectStore;
+  private static BasicRootedOzoneClientAdapterImpl adapter;
+
+  private String volumeName;
+  private String bucketName;
+  // Store path commonly used by tests that test functionality within a bucket
+  private Path testBucketPath;
+  private String rootPath;
+
+  @Before
+  public void init() throws Exception {
+    conf = new OzoneConfiguration();
+    cluster = MiniOzoneCluster.newBuilder(conf)
+        .setNumDatanodes(3)
+        .build();
+    cluster.waitForClusterToBeReady();
+    objectStore = cluster.getClient().getObjectStore();
+
+    // create a volume and a bucket to be used by RootedOzoneFileSystem (OFS)
+    OzoneBucket bucket = TestDataUtil.createVolumeAndBucket(cluster);
+    volumeName = bucket.getVolumeName();
+    bucketName = bucket.getName();
+    String testBucketStr =
+        OZONE_URI_DELIMITER + volumeName + OZONE_URI_DELIMITER + bucketName;
+    testBucketPath = new Path(testBucketStr);
+
+    rootPath = String.format("%s://%s/",
+        OzoneConsts.OZONE_OFS_URI_SCHEME, conf.get(OZONE_OM_ADDRESS_KEY));
+
+    // Set the fs.defaultFS and start the filesystem
+    conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, rootPath);
+    // Note: FileSystem#loadFileSystems won't load the OFS class due to the
+    //  missing META-INF services entry, hence this workaround.
+    conf.set("fs.ofs.impl", "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
+    fs = FileSystem.get(conf);
+    ofs = (RootedOzoneFileSystem) fs;
+    adapter = (BasicRootedOzoneClientAdapterImpl) ofs.getAdapter();
+  }
+
+  @After
+  public void teardown() {
+    if (cluster != null) {
+      cluster.shutdown();
+    }
+    IOUtils.closeQuietly(fs);
+  }
+
+  @Test
+  public void testOzoneFsServiceLoader() throws IOException {
+    OzoneConfiguration confTestLoader = new OzoneConfiguration();
+    // Note: FileSystem#loadFileSystems won't load the OFS class due to the
+    //  missing META-INF services entry, hence this workaround.
+    confTestLoader.set("fs.ofs.impl",
+        "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
+    Assert.assertEquals(FileSystem.getFileSystemClass(
+        OzoneConsts.OZONE_OFS_URI_SCHEME, confTestLoader),
+        RootedOzoneFileSystem.class);
+  }
+
+  @Test
+  public void testCreateDoesNotAddParentDirKeys() throws Exception {
+    Path grandparent = new Path(testBucketPath,
+        "testCreateDoesNotAddParentDirKeys");
+    Path parent = new Path(grandparent, "parent");
+    Path child = new Path(parent, "child");
+    ContractTestUtils.touch(fs, child);
+
+    OzoneKeyDetails key = getKey(child, false);
+    OFSPath childOFSPath = new OFSPath(child);
+    Assert.assertEquals(key.getName(), childOFSPath.getKeyName());
+
+    // Creating a child should not add parent keys to the bucket
+    try {
+      getKey(parent, true);
+    } catch (IOException ex) {
+      assertKeyNotFoundException(ex);
+    }
+
+    // List status on the parent should show the child file
+    Assert.assertEquals(
+        "List status of parent should include the 1 child file",
+        1L, fs.listStatus(parent).length);
+    Assert.assertTrue(
+        "Parent directory does not appear to be a directory",
+        fs.getFileStatus(parent).isDirectory());
+  }
+
+  @Test
+  public void testDeleteCreatesFakeParentDir() throws Exception {
+    Path grandparent = new Path(testBucketPath,
+        "testDeleteCreatesFakeParentDir");
+    Path parent = new Path(grandparent, "parent");
+    Path child = new Path(parent, "child");
+    ContractTestUtils.touch(fs, child);
+
+    // Verify that parent dir key does not exist
+    // Creating a child should not add parent keys to the bucket
+    try {
+      getKey(parent, true);
+    } catch (IOException ex) {
+      assertKeyNotFoundException(ex);
+    }
+
+    // Delete the child key
+    Assert.assertTrue(fs.delete(child, false));
+
+    // Deleting the only child should create the parent dir key if it does
+    // not exist
+    OFSPath parentOFSPath = new OFSPath(parent);
+    String parentKey = parentOFSPath.getKeyName() + "/";
+    OzoneKeyDetails parentKeyInfo = getKey(parent, true);
+    Assert.assertEquals(parentKey, parentKeyInfo.getName());
+
+    // Recursive delete with DeleteIterator
+    Assert.assertTrue(fs.delete(grandparent, true));
+  }
+
+  @Test
+  public void testListStatus() throws Exception {
+    Path parent = new Path(testBucketPath, "testListStatus");
+    Path file1 = new Path(parent, "key1");
+    Path file2 = new Path(parent, "key2");
+
+    FileStatus[] fileStatuses = ofs.listStatus(testBucketPath);
+    Assert.assertEquals("Should be empty", 0, fileStatuses.length);
+
+    ContractTestUtils.touch(fs, file1);
+    ContractTestUtils.touch(fs, file2);
+
+    fileStatuses = ofs.listStatus(testBucketPath);
+    Assert.assertEquals("Should have created parent",
+        1, fileStatuses.length);
+    Assert.assertEquals("Parent path doesn't match",
+        fileStatuses[0].getPath().toUri().getPath(), parent.toString());
+
+    // ListStatus on a directory should return all subdirs along with
+    // files, even if there exists a file and sub-dir with the same name.
+    fileStatuses = ofs.listStatus(parent);
+    Assert.assertEquals(
+        "FileStatus did not return all children of the directory",
+        2, fileStatuses.length);
+
+    // ListStatus should return only the immediate children of a directory.
+    Path file3 = new Path(parent, "dir1/key3");
+    Path file4 = new Path(parent, "dir1/key4");
+    ContractTestUtils.touch(fs, file3);
+    ContractTestUtils.touch(fs, file4);
+    fileStatuses = ofs.listStatus(parent);
+    Assert.assertEquals(
+        "FileStatus did not return all children of the directory",
+        3, fileStatuses.length);
+  }
+
+  /**
+   * OFS: Helper function for tests. Return a volume name that doesn't exist.
+   */
+  private String getRandomNonExistVolumeName() throws IOException {
+    final int numDigit = 5;
+    long retriesLeft = Math.round(Math.pow(10, numDigit));
+    String name = null;
+    while (name == null && retriesLeft-- > 0) {
+      name = "volume-" + RandomStringUtils.randomNumeric(numDigit);
+      // Check volume existence.
+      Iterator<? extends OzoneVolume> iter =
+          objectStore.listVolumesByUser(null, name, null);
+      if (iter.hasNext()) {
+        // If there is a match, try again.
+        // Note that volume name prefix match doesn't equal volume existence
+        //  but the check is sufficient for this test.
+        name = null;
+      }
+    }
+    if (name == null) {
+      Assert.fail(
+          "Failed to generate random volume name that doesn't exist already.");
+    }
+    return name;
+  }
+
+  /**
+   * OFS: Test mkdir on a volume, bucket and dir that don't exist.
+   */
+  @Test
+  public void testMkdirOnNonExistentVolumeBucketDir() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    String bucketNameLocal = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path root = new Path("/" + volumeNameLocal + "/" + bucketNameLocal);
+    Path dir1 = new Path(root, "dir1");
+    Path dir12 = new Path(dir1, "dir12");
+    Path dir2 = new Path(root, "dir2");
+    fs.mkdirs(dir12);
+    fs.mkdirs(dir2);
+
+    // Check volume and bucket existence, they should both be created.
+    OzoneVolume ozoneVolume = objectStore.getVolume(volumeNameLocal);
+    OzoneBucket ozoneBucket = ozoneVolume.getBucket(bucketNameLocal);
+    OFSPath ofsPathDir1 = new OFSPath(dir12);
+    String key = ofsPathDir1.getKeyName() + "/";
+    OzoneKeyDetails ozoneKeyDetails = ozoneBucket.getKey(key);
+    Assert.assertEquals(key, ozoneKeyDetails.getName());
+
+    // Verify that directories are created.
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        fileStatuses[0].getPath().toUri().getPath(), dir1.toString());
+    Assert.assertEquals(
+        fileStatuses[1].getPath().toUri().getPath(), dir2.toString());
+
+    fileStatuses = ofs.listStatus(dir1);
+    Assert.assertEquals(
+        fileStatuses[0].getPath().toUri().getPath(), dir12.toString());
+    fileStatuses = ofs.listStatus(dir12);
+    Assert.assertEquals(fileStatuses.length, 0);
+    fileStatuses = ofs.listStatus(dir2);
+    Assert.assertEquals(fileStatuses.length, 0);
+  }
+
+  /**
+   * OFS: Test mkdir on a volume and bucket that don't exist.
+   */
+  @Test
+  public void testMkdirNonExistentVolumeBucket() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    String bucketNameLocal = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path newVolBucket = new Path(
+        "/" + volumeNameLocal + "/" + bucketNameLocal);
+    fs.mkdirs(newVolBucket);
+
+    // Verify with listVolumes and listBuckets
+    Iterator<? extends OzoneVolume> iterVol =
+        objectStore.listVolumesByUser(null, volumeNameLocal, null);
+    OzoneVolume ozoneVolume = iterVol.next();
+    Assert.assertNotNull(ozoneVolume);
+    Assert.assertEquals(volumeNameLocal, ozoneVolume.getName());
+
+    Iterator<? extends OzoneBucket> iterBuc =
+        ozoneVolume.listBuckets("bucket-");
+    OzoneBucket ozoneBucket = iterBuc.next();
+    Assert.assertNotNull(ozoneBucket);
+    Assert.assertEquals(bucketNameLocal, ozoneBucket.getName());
+
+    // TODO: Use listStatus to check volume and bucket creation in HDDS-2928.
+  }
+
+  /**
+   * OFS: Test mkdir on a volume that doesn't exist.
+   */
+  @Test
+  public void testMkdirNonExistentVolume() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    Path newVolume = new Path("/" + volumeNameLocal);
+    fs.mkdirs(newVolume);
+
+    // Verify with listVolumes and listBuckets
+    Iterator<? extends OzoneVolume> iterVol =
+        objectStore.listVolumesByUser(null, volumeNameLocal, null);
+    OzoneVolume ozoneVolume = iterVol.next();
+    Assert.assertNotNull(ozoneVolume);
+    Assert.assertEquals(volumeNameLocal, ozoneVolume.getName());
+
+    // TODO: Use listStatus to check volume and bucket creation in HDDS-2928.
+  }
+
+  /**
+   * OFS: Test getFileStatus on root.
+   */
+  @Test
+  public void testGetFileStatusRoot() throws Exception {
+    Path root = new Path("/");
+    FileStatus fileStatus = fs.getFileStatus(root);
+    Assert.assertNotNull(fileStatus);
+    Assert.assertEquals(new Path(rootPath), fileStatus.getPath());
+    Assert.assertTrue(fileStatus.isDirectory());
+    Assert.assertEquals(FsPermission.getDirDefault(),
+        fileStatus.getPermission());
+  }
+
+  /**
+   * Test listStatus operation in a bucket.
+   */
+  @Test
+  public void testListStatusInBucket() throws Exception {
+    Path root = new Path("/" + volumeName + "/" + bucketName);
+    Path dir1 = new Path(root, "dir1");
+    Path dir12 = new Path(dir1, "dir12");
+    Path dir2 = new Path(root, "dir2");
+    fs.mkdirs(dir12);
+    fs.mkdirs(dir2);
+
+    // ListStatus on root should return dir1 (even though /dir1 key does not
+    // exist) and dir2 only. dir12 is not an immediate child of root and
+    // hence should not be listed.
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        "FileStatus should return only the immediate children",
+        2, fileStatuses.length);
+
+    // Verify that dir12 is not included in the result of the listStatus on root
+    String fileStatus1 = fileStatuses[0].getPath().toUri().getPath();
+    String fileStatus2 = fileStatuses[1].getPath().toUri().getPath();
+    Assert.assertNotEquals(fileStatus1, dir12.toString());
+    Assert.assertNotEquals(fileStatus2, dir12.toString());
+  }
+
+  /**
+   * Tests listStatus operation on a directory with a large number of entries.
+   */
+  @Test
+  public void testListStatusOnLargeDirectory() throws Exception {
+    Path root = new Path("/" + volumeName + "/" + bucketName);
+    Set<String> paths = new TreeSet<>();
+    int numDirs = LISTING_PAGE_SIZE + LISTING_PAGE_SIZE / 2;
+    for (int i = 0; i < numDirs; i++) {
+      Path p = new Path(root, String.valueOf(i));
+      fs.mkdirs(p);
+      paths.add(p.getName());
+    }
+
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        "Total directories listed do not match the existing directories",
+        numDirs, fileStatuses.length);
+
+    for (int i = 0; i < numDirs; i++) {
+      Assert.assertTrue(paths.contains(fileStatuses[i].getPath().getName()));
+    }
+  }
+
+  /**
+   * Tests listStatus on a path with subdirs.
+   */
+  @Test
+  public void testListStatusOnSubDirs() throws Exception {
+    // Create the following key structure
+    //      /dir1/dir11/dir111
+    //      /dir1/dir12
+    //      /dir1/dir12/file121
+    //      /dir2
+    // ListStatus on /dir1 should return all its immediate subdirs only,
+    // which are /dir1/dir11 and /dir1/dir12. Deeper descendant files/dirs
+    // (/dir1/dir12/file121 and /dir1/dir11/dir111) should not be returned by
+    // listStatus.
+    Path dir1 = new Path(testBucketPath, "dir1");
+    Path dir11 = new Path(dir1, "dir11");
+    Path dir111 = new Path(dir11, "dir111");
+    Path dir12 = new Path(dir1, "dir12");
+    Path file121 = new Path(dir12, "file121");
+    Path dir2 = new Path(testBucketPath, "dir2");
+    fs.mkdirs(dir111);
+    fs.mkdirs(dir12);
+    ContractTestUtils.touch(fs, file121);
+    fs.mkdirs(dir2);
+
+    FileStatus[] fileStatuses = ofs.listStatus(dir1);
+    Assert.assertEquals(
+        "FileStatus should return only the immediate children",
+        2, fileStatuses.length);
+
+    // Verify that the two children of /dir1 returned by listStatus operation
+    // are /dir1/dir11 and /dir1/dir12.
+    String fileStatus1 = fileStatuses[0].getPath().toUri().getPath();
+    String fileStatus2 = fileStatuses[1].getPath().toUri().getPath();
+    Assert.assertTrue(fileStatus1.equals(dir11.toString()) ||
+        fileStatus1.equals(dir12.toString()));
+    Assert.assertTrue(fileStatus2.equals(dir11.toString()) ||
+        fileStatus2.equals(dir12.toString()));
+  }
+
+  @Test
+  public void testNonExplicitlyCreatedPathExistsAfterItsLeafsWereRemoved()
+      throws Exception {
+    Path source = new Path(testBucketPath, "source");
+    Path interimPath = new Path(source, "interimPath");
+    Path leafInsideInterimPath = new Path(interimPath, "leaf");
+    Path target = new Path(testBucketPath, "target");
+    Path leafInTarget = new Path(target, "leaf");
+
+    fs.mkdirs(source);
+    fs.mkdirs(target);
+    fs.mkdirs(leafInsideInterimPath);
+
+    Assert.assertTrue(fs.rename(leafInsideInterimPath, leafInTarget));
+
+    // after rename listStatus for interimPath should succeed and
+    // interimPath should have no children
+    FileStatus[] statuses = fs.listStatus(interimPath);
+    Assert.assertNotNull("listStatus returned a null array", statuses);
+    Assert.assertEquals("Statuses array is not empty", 0, statuses.length);
+    FileStatus fileStatus = fs.getFileStatus(interimPath);
+    Assert.assertEquals("FileStatus does not point to interimPath",
+        interimPath.getName(), fileStatus.getPath().getName());
+  }
+
+  /**
+   * OFS: Try to rename a key to a different bucket. The attempt should fail.
+   */
+  @Test
+  public void testRenameToDifferentBucket() throws IOException {
+    Path source = new Path(testBucketPath, "source");
+    Path interimPath = new Path(source, "interimPath");
+    Path leafInsideInterimPath = new Path(interimPath, "leaf");
+    Path target = new Path(testBucketPath, "target");
+
+    fs.mkdirs(source);
+    fs.mkdirs(target);
+    fs.mkdirs(leafInsideInterimPath);
+
+    // Attempt to rename the key to a different bucket
+    Path bucket2 = new Path(OZONE_URI_DELIMITER + volumeName +
+        OZONE_URI_DELIMITER + bucketName + "test");
+    Path leafInTargetInAnotherBucket = new Path(bucket2, "leaf");
+    try {
+      fs.rename(leafInsideInterimPath, leafInTargetInAnotherBucket);
+      Assert.fail(
+          "Should have thrown exception when renaming to a different bucket");
+    } catch (IOException ignored) {
+      // Test passed. Exception thrown as expected.
+    }
+  }
+
+  private OzoneKeyDetails getKey(Path keyPath, boolean isDirectory)
+      throws IOException {
+    String key = ofs.pathToKey(keyPath);
+    if (isDirectory) {
+      key = key + OZONE_URI_DELIMITER;
+    }
+    OFSPath ofsPath = new OFSPath(key);
+    String keyInBucket = ofsPath.getKeyName();
+    return cluster.getClient().getObjectStore().getVolume(volumeName)
+        .getBucket(bucketName).getKey(keyInBucket);
+  }
+
+  private void assertKeyNotFoundException(IOException ex) {
+    GenericTestUtils.assertExceptionContains("KEY_NOT_FOUND", ex);
+  }
+
+  /**
+   * Helper function for testListStatusRootAndVolume*.
+   * Each call creates one volume, one bucket under that volume,
+   * two dir under that bucket, one subdir under one of the dirs,
+   * and one file under the subdir.
+   */
+  private Path createRandomVolumeBucketWithDirs() throws IOException {
+    String volume1 = getRandomNonExistVolumeName();
+    String bucket1 = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path bucketPath1 = new Path(
+        OZONE_URI_DELIMITER + volume1 + OZONE_URI_DELIMITER + bucket1);
+
+    Path dir1 = new Path(bucketPath1, "dir1");
+    fs.mkdirs(dir1);  // Intentionally creating this "in-the-middle" dir key
+    Path subdir1 = new Path(dir1, "subdir1");
+    fs.mkdirs(subdir1);
+    Path dir2 = new Path(bucketPath1, "dir2");
+    fs.mkdirs(dir2);
+
+    try (FSDataOutputStream stream =
+        ofs.create(new Path(dir2, "file1"))) {
+      stream.write(1);
+    }
+
+    return bucketPath1;
+  }
+
+  /**
+   * OFS: Test non-recursive listStatus on root and volume.
+   */
+  @Test
+  public void testListStatusRootAndVolumeNonRecursive() throws Exception {
+    Path bucketPath1 = createRandomVolumeBucketWithDirs();
+    createRandomVolumeBucketWithDirs();
+    // listStatus("/volume/bucket")
+    FileStatus[] fileStatusBucket = ofs.listStatus(bucketPath1);
+    Assert.assertEquals(2, fileStatusBucket.length);
+    // listStatus("/volume")
+    Path volume = new Path(
+        OZONE_URI_DELIMITER + new OFSPath(bucketPath1).getVolumeName());
+    FileStatus[] fileStatusVolume = ofs.listStatus(volume);
+    Assert.assertEquals(1, fileStatusVolume.length);
+    // listStatus("/")
+    Path root = new Path(OZONE_URI_DELIMITER);
+    FileStatus[] fileStatusRoot = ofs.listStatus(root);
+    Assert.assertEquals(2, fileStatusRoot.length);
+  }
+
+  /**
+   * Helper function to do FileSystem#listStatus recursively.
+   * Simulate what FsShell does, using DFS.
+   */
+  private void listStatusRecursiveHelper(Path curPath, List<FileStatus> result)
+      throws IOException {
+    FileStatus[] startList = ofs.listStatus(curPath);
+    for (FileStatus fileStatus : startList) {
+      result.add(fileStatus);
+      if (fileStatus.isDirectory()) {
+        Path nextPath = fileStatus.getPath();
+        listStatusRecursiveHelper(nextPath, result);
+      }
+    }
+  }
+
+  /**
+   * Helper function to call listStatus in adapter implementation.
+   */
+  private List<FileStatus> callAdapterListStatus(String pathStr,
+      boolean recursive, String startPath, long numEntries) throws IOException {
+    return adapter.listStatus(pathStr, recursive, startPath, numEntries,
+        ofs.getUri(), ofs.getWorkingDirectory(), ofs.getUsername())
+        .stream().map(ofs::convertFileStatus).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to compare recursive listStatus results from adapter
+   * and (simulated) FileSystem.
+   */
+  private void listStatusCheckHelper(Path path) throws IOException {
+    // Get recursive listStatus result directly from adapter impl
+    List<FileStatus> statusesFromAdapter = callAdapterListStatus(
+        path.toString(), true, "", 1000);
+    // Get recursive listStatus result with FileSystem API by simulating FsShell
+    List<FileStatus> statusesFromFS = new ArrayList<>();
+    listStatusRecursiveHelper(path, statusesFromFS);
+    // Compare. The results would be in the same order due to assumptions:
+    // 1. They are both using DFS internally;
+    // 2. They both return ordered results.
+    Assert.assertEquals(statusesFromAdapter.size(), statusesFromFS.size());
+    final int n = statusesFromFS.size();
+    for (int i = 0; i < n; i++) {
+      FileStatus statusFromAdapter = statusesFromAdapter.get(i);
+      FileStatus statusFromFS = statusesFromFS.get(i);
+      Assert.assertEquals(statusFromAdapter.getPath(), statusFromFS.getPath());
+      Assert.assertEquals(statusFromAdapter.getLen(), statusFromFS.getLen());
+      Assert.assertEquals(statusFromAdapter.isDirectory(),
+          statusFromFS.isDirectory());
+      // TODO: When HDDS-3054 is in, uncomment the lines below.

Review comment:
       HDDS-3054 seems to be resolved, as far as I can see.

##########
File path: hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystemWithMocks.java
##########
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.api.mockito.PowerMockito;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.net.URI;
+
+import static org.junit.Assert.assertEquals;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Ozone File system tests that are light weight and use mocks.
+ */
+@RunWith(PowerMockRunner.class)
+@PrepareForTest({ OzoneClientFactory.class, UserGroupInformation.class })
+@PowerMockIgnore("javax.management.*")
+public class TestRootedOzoneFileSystemWithMocks {

Review comment:
       FYI: there is a full, in-memory implementation of `ObjectStore` in the `s3` project. It can be useful for similar tests if we move it to a common place. (BTW: I like this lightweight test.)
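   As a sketch of what such a lightweight fake might look like: the class below is a minimal, hypothetical `InMemoryStore`, not the actual in-memory `ObjectStore` from the s3 project (which has a much richer API). The point is that a hand-written fake needs neither PowerMock machinery nor a MiniOzoneCluster.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory object store fake for lightweight unit tests.
// Illustrative names only; not the real Ozone ObjectStore API.
class InMemoryStore {
  // volume name -> (key name -> value)
  private final Map<String, Map<String, byte[]>> volumes = new HashMap<>();

  void createVolume(String volume) {
    volumes.putIfAbsent(volume, new HashMap<>());
  }

  void putKey(String volume, String key, byte[] value) {
    Map<String, byte[]> vol = volumes.get(volume);
    if (vol == null) {
      throw new IllegalArgumentException("No such volume: " + volume);
    }
    vol.put(key, value);
  }

  byte[] getKey(String volume, String key) {
    Map<String, byte[]> vol = volumes.get(volume);
    return vol == null ? null : vol.get(key);
  }
}
```

   A test can then instantiate the fake directly and assert on its state, without any classloader or mocking framework setup.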

##########
File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
##########
@@ -0,0 +1,904 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.apache.http.client.utils.URIBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_EMPTY;
+
+/**
+ * The minimal Ozone Filesystem implementation.
+ * <p>
+ * This is a basic version which doesn't extend
+ * KeyProviderTokenIssuer and doesn't include statistics. It can be used
+ * from older hadoop version. For newer hadoop version use the full featured
+ * BasicRootedOzoneFileSystem.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class BasicRootedOzoneFileSystem extends FileSystem {
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneFileSystem.class);
+
+  /**
+   * The Ozone client for connecting to Ozone server.
+   */
+
+  private URI uri;
+  private String userName;
+  private Path workingDir;
+  private OzoneClientAdapter adapter;
+  private BasicRootedOzoneClientAdapterImpl adapterImpl;

Review comment:
       Why don't we use the interface instead of the implementation?
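   For illustration, the pattern the question points at can be sketched like this (all names are hypothetical stand-ins, not the actual Ozone classes): declare the field with the interface type and cast only at the few call sites that truly need implementation-specific methods.

```java
// Minimal sketch: interface-typed field keeps the caller decoupled from the
// concrete adapter class. Illustrative names, not the real Ozone types.
interface ClientAdapter {
  String readKey(String key);
}

class ClientAdapterImpl implements ClientAdapter {
  @Override
  public String readKey(String key) {
    return "data-for-" + key;
  }

  // Implementation-only helper that is deliberately not on the interface.
  String implOnlyDetail() {
    return "impl-detail";
  }
}

class FileSystemShim {
  private final ClientAdapter adapter;  // interface type, not the impl class

  FileSystemShim(ClientAdapter adapter) {
    this.adapter = adapter;
  }

  String open(String key) {
    return adapter.readKey(key);  // no cast needed on the common path
  }
}
```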

##########
File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
##########
@@ -0,0 +1,904 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.apache.http.client.utils.URIBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_EMPTY;
+
+/**
+ * The minimal Ozone Filesystem implementation.
+ * <p>
+ * This is a basic version which doesn't extend
+ * KeyProviderTokenIssuer and doesn't include statistics. It can be used
+ * from older hadoop version. For newer hadoop version use the full featured
+ * BasicRootedOzoneFileSystem.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class BasicRootedOzoneFileSystem extends FileSystem {
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneFileSystem.class);
+
+  /**
+   * The Ozone client for connecting to Ozone server.
+   */
+
+  private URI uri;
+  private String userName;
+  private Path workingDir;
+  private OzoneClientAdapter adapter;
+  private BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+  private static final String URI_EXCEPTION_TEXT =
+      "URL should be one of the following formats: " +
+      "ofs://om-service-id/path/to/key  OR " +
+      "ofs://om-host.example.com/path/to/key  OR " +
+      "ofs://om-host.example.com:5678/path/to/key";
+
+  @Override
+  public void initialize(URI name, Configuration conf) throws IOException {
+    super.initialize(name, conf);
+    setConf(conf);
+    Objects.requireNonNull(name.getScheme(), "No scheme provided in " + name);
+    Preconditions.checkArgument(getScheme().equals(name.getScheme()),
+        "Invalid scheme provided in " + name);
+
+    String authority = name.getAuthority();
+    if (authority == null) {
+      // authority is null when fs.defaultFS is not a qualified ofs URI and
+      // ofs:/// is passed to the client. matcher will NPE if authority is null
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+
+    String omHostOrServiceId;
+    int omPort = -1;
+    // Parse hostname and port
+    String[] parts = authority.split(":");
+    if (parts.length > 2) {
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+    omHostOrServiceId = parts[0];
+    if (parts.length == 2) {
+      try {
+        omPort = Integer.parseInt(parts[1]);
+      } catch (NumberFormatException e) {
+        throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+      }
+    }
+
+    try {
+      uri = new URIBuilder().setScheme(OZONE_OFS_URI_SCHEME)
+          .setHost(authority)
+          .build();
+      LOG.trace("Ozone URI for OFS initialization is " + uri);
+
+      //isolated is the default for ozonefs-lib-legacy which includes the
+      // /ozonefs.txt, otherwise the default is false. It could be overridden.
+      boolean defaultValue =
+          BasicRootedOzoneFileSystem.class.getClassLoader()
+              .getResource("ozonefs.txt") != null;
+
+      //Use string here instead of the constant as constant may not be available
+      //on the classpath of a hadoop 2.7
+      boolean isolatedClassloader =
+          conf.getBoolean("ozone.fs.isolated-classloader", defaultValue);
+
+      ConfigurationSource source;
+      if (conf instanceof OzoneConfiguration) {
+        source = (ConfigurationSource) conf;
+      } else {
+        source = new LegacyHadoopConfigurationSource(conf);
+      }
+      this.adapter =
+          createAdapter(source,
+              omHostOrServiceId, omPort,
+              isolatedClassloader);
+      this.adapterImpl = (BasicRootedOzoneClientAdapterImpl) this.adapter;
+
+      try {
+        this.userName =
+            UserGroupInformation.getCurrentUser().getShortUserName();
+      } catch (IOException e) {
+        this.userName = OZONE_DEFAULT_USER;
+      }
+      this.workingDir = new Path(OZONE_USER_DIR, this.userName)
+          .makeQualified(this.uri, this.workingDir);
+    } catch (URISyntaxException ue) {
+      final String msg = "Invalid Ozone endpoint " + name;
+      LOG.error(msg, ue);
+      throw new IOException(msg, ue);
+    }
+  }
+
+  protected OzoneClientAdapter createAdapter(ConfigurationSource conf,
+      String omHost, int omPort, boolean isolatedClassloader)
+      throws IOException {
+
+    if (isolatedClassloader) {
+      return OzoneClientAdapterFactory.createAdapter();
+    } else {
+      return new BasicRootedOzoneClientAdapterImpl(omHost, omPort, conf);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    try {
+      adapter.close();
+    } finally {
+      super.close();
+    }
+  }
+
+  @Override
+  public URI getUri() {
+    return uri;
+  }
+
+  @Override
+  public String getScheme() {
+    return OZONE_OFS_URI_SCHEME;
+  }
+
+  @Override
+  public FSDataInputStream open(Path path, int bufferSize) throws IOException {
+    incrementCounter(Statistic.INVOCATION_OPEN);
+    statistics.incrementReadOps(1);
+    LOG.trace("open() path: {}", path);
+    final String key = pathToKey(path);
+    return new FSDataInputStream(
+        new OzoneFSInputStream(adapter.readFile(key), statistics));
+  }
+
+  protected void incrementCounter(Statistic statistic) {
+    //don't do anything in this default implementation.
+  }
+
+  @Override
+  public FSDataOutputStream create(Path f, FsPermission permission,
+      boolean overwrite, int bufferSize,
+      short replication, long blockSize,
+      Progressable progress) throws IOException {
+    LOG.trace("create() path:{}", f);
+    incrementCounter(Statistic.INVOCATION_CREATE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(f);
+    return createOutputStream(key, replication, overwrite, true);
+  }
+
+  @Override
+  public FSDataOutputStream createNonRecursive(Path path,
+      FsPermission permission,
+      EnumSet<CreateFlag> flags,
+      int bufferSize,
+      short replication,
+      long blockSize,
+      Progressable progress) throws IOException {
+    incrementCounter(Statistic.INVOCATION_CREATE_NON_RECURSIVE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(path);
+    return createOutputStream(key,
+        replication, flags.contains(CreateFlag.OVERWRITE), false);
+  }
+
+  private FSDataOutputStream createOutputStream(String key, short replication,
+      boolean overwrite, boolean recursive) throws IOException {
+    return new FSDataOutputStream(adapter.createFile(key,
+        replication, overwrite, recursive), statistics);
+  }
+
+  @Override
+  public FSDataOutputStream append(Path f, int bufferSize,
+      Progressable progress) throws IOException {
+    throw new UnsupportedOperationException("append() Not implemented by the "
+        + getClass().getSimpleName() + " FileSystem implementation");
+  }
+
+  private class RenameIterator extends OzoneListingIterator {
+    private final String srcPath;
+    private final String dstPath;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    RenameIterator(Path srcPath, Path dstPath)
+        throws IOException {
+      super(srcPath);
+      this.srcPath = pathToKey(srcPath);
+      this.dstPath = pathToKey(dstPath);
+      LOG.trace("rename from:{} to:{}", this.srcPath, this.dstPath);
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(srcPath);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) throws IOException {
+      String newPath = dstPath.concat(keyPath.substring(srcPath.length()));
+      adapterImpl.rename(this.bucket, keyPath, newPath);
+      return true;
+    }
+  }
+
+  /**
+   * Check whether the source and destination path are valid and then perform
+   * rename from source path to destination path.
+   * <p>
+   * The rename operation is performed by renaming the keys with src as prefix.
+   * For such keys the prefix is changed from src to dst.
+   *
+   * @param src source path for rename
+   * @param dst destination path for rename
+   * @return true if rename operation succeeded or
+   * if the src and dst have the same path and are of the same type
+   * @throws IOException on I/O errors or if the src/dst paths are invalid.
+   */
+  @Override
+  public boolean rename(Path src, Path dst) throws IOException {
+    incrementCounter(Statistic.INVOCATION_RENAME);
+    statistics.incrementWriteOps(1);
+    if (src.equals(dst)) {
+      return true;
+    }
+
+    LOG.trace("rename() from: {} to: {}", src, dst);
+    if (src.isRoot()) {
+      // Cannot rename root of file system
+      LOG.trace("Cannot rename the root of a filesystem");
+      return false;
+    }
+
+    // src and dst should be in the same bucket
+    OFSPath ofsSrc = new OFSPath(src);
+    OFSPath ofsDst = new OFSPath(dst);
+    if (!ofsSrc.isInSameBucketAs(ofsDst)) {
+      throw new IOException("Cannot rename a key to a different bucket");
+    }
+
+    // Cannot rename a directory to its own subdirectory
+    Path dstParent = dst.getParent();
+    while (dstParent != null && !src.equals(dstParent)) {
+      dstParent = dstParent.getParent();
+    }
+    Preconditions.checkArgument(dstParent == null,
+        "Cannot rename a directory to its own subdirectory");
+    // Check if the source exists
+    FileStatus srcStatus;
+    try {
+      srcStatus = getFileStatus(src);
+    } catch (FileNotFoundException fnfe) {
+      // source doesn't exist, return
+      return false;
+    }
+
+    // Check if the destination exists
+    FileStatus dstStatus;
+    try {
+      dstStatus = getFileStatus(dst);
+    } catch (FileNotFoundException fnde) {
+      dstStatus = null;
+    }
+
+    if (dstStatus == null) {
+      // If dst doesn't exist, check whether dst parent dir exists or not
+      // if the parent exists, the source can still be renamed to dst path
+      dstStatus = getFileStatus(dst.getParent());
+      if (!dstStatus.isDirectory()) {
+        throw new IOException(String.format(
+            "Failed to rename %s to %s, %s is a file", src, dst,
+            dst.getParent()));
+      }
+    } else {
+      // if dst exists and source and destination are same,
+      // check both the src and dst are of same type
+      if (srcStatus.getPath().equals(dstStatus.getPath())) {
+        return !srcStatus.isDirectory();
+      } else if (dstStatus.isDirectory()) {
+        // If dst is a directory, rename source as subpath of it.
+        // for example rename /source to /dst will lead to /dst/source
+        dst = new Path(dst, src.getName());
+        FileStatus[] statuses;
+        try {
+          statuses = listStatus(dst);
+        } catch (FileNotFoundException fnde) {
+          statuses = null;
+        }
+
+        if (statuses != null && statuses.length > 0) {
+          // If dst already exists and is not empty
+          throw new FileAlreadyExistsException(String.format(
+              "Failed to rename %s to %s, file already exists or not empty!",
+              src, dst));
+        }
+      } else {
+        // If dst is not a directory
+        throw new FileAlreadyExistsException(String.format(
+            "Failed to rename %s to %s, file already exists!", src, dst));
+      }
+    }
+
+    if (srcStatus.isDirectory()) {
+      if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) {
+        LOG.trace("Cannot rename a directory to a subdirectory of self");
+        return false;
+      }
+    }
+    RenameIterator iterator = new RenameIterator(src, dst);
+    boolean result = iterator.iterate();
+    if (result) {
+      createFakeParentDirectory(src);
+    }
+    return result;
+  }
+
+  private class DeleteIterator extends OzoneListingIterator {
+    final private boolean recursive;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    DeleteIterator(Path f, boolean recursive)
+        throws IOException {
+      super(f);
+      this.recursive = recursive;
+      if (getStatus().isDirectory()
+          && !this.recursive
+          && listStatus(f).length != 0) {
+        throw new PathIsNotEmptyDirectoryException(f.toString());
+      }
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(f);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) {
+      if (keyPath.equals("")) {
+        LOG.trace("Skipping deleting root directory");
+        return true;
+      } else {
+        LOG.trace("Deleting: {}", keyPath);
+        boolean succeed = adapterImpl.deleteObject(this.bucket, keyPath);
+        // if recursive delete is requested ignore the return value of
+        // deleteObject and issue deletes for other keys.
+        return recursive || succeed;
+      }
+    }
+  }
+
+  /**
+   * Deletes the children of the input dir path by iterating though the
+   * DeleteIterator.
+   *
+   * @param f directory path to be deleted
+   * @return true if successfully deletes all required keys, false otherwise
+   * @throws IOException
+   */
+  private boolean innerDelete(Path f, boolean recursive) throws IOException {
+    LOG.trace("delete() path:{} recursive:{}", f, recursive);
+    try {
+      DeleteIterator iterator = new DeleteIterator(f, recursive);
+      return iterator.iterate();
+    } catch (FileNotFoundException e) {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Couldn't delete {} - does not exist", f);
+      }
+      return false;
+    }
+  }
+
+  @Override
+  public boolean delete(Path f, boolean recursive) throws IOException {
+    incrementCounter(Statistic.INVOCATION_DELETE);
+    statistics.incrementWriteOps(1);
+    LOG.debug("Delete path {} - recursive {}", f, recursive);
+    FileStatus status;
+    try {
+      status = getFileStatus(f);
+    } catch (FileNotFoundException ex) {
+      LOG.warn("delete: Path does not exist: {}", f);
+      return false;
+    }
+
+    if (status == null) {
+      return false;
+    }
+
+    String key = pathToKey(f);
+    boolean result;
+
+    if (status.isDirectory()) {
+      LOG.debug("delete: Path is a directory: {}", f);
+      OFSPath ofsPath = new OFSPath(key);
+
+      // Handle rm root
+      if (ofsPath.isRoot()) {
+        // Intentionally drop support for rm root
+        // because it is too dangerous and doesn't provide much value
+        LOG.warn("delete: OFS does not support rm root. "
+            + "To wipe the cluster, please re-init OM instead.");
+        return false;
+      }
+
+      // Handle delete volume
+      if (ofsPath.isVolume()) {
+        String volumeName = ofsPath.getVolumeName();
+        if (recursive) {
+          // Delete all buckets first
+          OzoneVolume volume =
+              adapterImpl.getObjectStore().getVolume(volumeName);

Review comment:
       The main idea behind the adapter is that we use only methods on the adapter, which provides a clean definition of the Ozone methods in use. I would avoid the usage of getObjectStore() as it leaks the internal methods.
   
   The original goal of the adapter was to support different classloaders, but it still seems to be a good design pattern to use an `adapter.getVolume` instead.
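   A minimal sketch of that suggestion, with every name an illustrative stand-in rather than the real Ozone API: the adapter exposes a narrow volume lookup so callers never see the underlying object store.

```java
// Hypothetical adapter that exposes getVolume directly, instead of leaking
// the whole object store through a getObjectStore() accessor.
interface Volume {
  String getName();
}

interface VolumeAwareAdapter {
  Volume getVolume(String volumeName);
}

class VolumeAwareAdapterImpl implements VolumeAwareAdapter {
  @Override
  public Volume getVolume(String volumeName) {
    // A real implementation would delegate to its internal ObjectStore;
    // this trivial lambda stands in for that lookup.
    return () -> volumeName;
  }
}
```

   With this shape, code like the quoted volume-delete path would call `adapter.getVolume(volumeName)` and the internal store stays private to the adapter implementation.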




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] sonarcloud[bot] removed a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644420293


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [3 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [41 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/0.png' alt='8.4%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [8.4% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='14.9%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [14.9% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-643381693


   The only test failure after the HDDS-3767 merge is it-hdds-om TestOzoneManagerHAWithData, which is a flaky test also seen on the master branch: https://elek.github.io/ozone-build-results/ Thanks Marton for this useful page.




[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r437759639



##########
File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
##########
@@ -0,0 +1,904 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.apache.http.client.utils.URIBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_EMPTY;
+
+/**
+ * The minimal Ozone Filesystem implementation.
+ * <p>
+ * This is a basic version which doesn't extend
+ * KeyProviderTokenIssuer and doesn't include statistics. It can be used
+ * from older hadoop versions. For newer hadoop versions use the full featured
+ * RootedOzoneFileSystem.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class BasicRootedOzoneFileSystem extends FileSystem {
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneFileSystem.class);
+
+  /**
+   * The Ozone client for connecting to Ozone server.
+   */
+
+  private URI uri;
+  private String userName;
+  private Path workingDir;
+  private OzoneClientAdapter adapter;
+  private BasicRootedOzoneClientAdapterImpl adapterImpl;

Review comment:
       `this.adapterImpl = (BasicRootedOzoneClientAdapterImpl) this.adapter;`
   
   this was intended to make the usage of it cleaner
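
   The pattern described here — performing the down-cast once at initialization and keeping a second, pre-cast field so impl-only methods need no cast at each call site — can be sketched in isolation. The class names below are simplified stand-ins, not the real Ozone types:

   ```java
   // Simplified stand-ins for OzoneClientAdapter and its rooted implementation.
   interface ClientAdapter {
     String getScheme();
   }

   class RootedClientAdapterImpl implements ClientAdapter {
     @Override
     public String getScheme() { return "ofs"; }
     // Method that exists only on the implementation, not on the interface.
     String listVolumes() { return "vol1,vol2"; }
   }

   public class AdapterCastDemo {
     private final ClientAdapter adapter;               // used for interface calls
     private final RootedClientAdapterImpl adapterImpl; // pre-cast once at init

     AdapterCastDemo(ClientAdapter adapter) {
       this.adapter = adapter;
       // One cast here keeps every later impl-only call site cast-free.
       this.adapterImpl = (RootedClientAdapterImpl) adapter;
     }

     public static void main(String[] args) {
       AdapterCastDemo demo = new AdapterCastDemo(new RootedClientAdapterImpl());
       System.out.println(demo.adapter.getScheme());
       System.out.println(demo.adapterImpl.listVolumes());
     }
   }
   ```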






[GitHub] [hadoop-ozone] sonarcloud[bot] removed a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] removed a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644195491


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [3 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [41 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/0.png' alt='8.4%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [8.4% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='14.9%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [14.9% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-643381693


   The only test failure after the HDDS-3767 merge is it-hdds-om `TestOzoneManagerHAWithData#testOMRestart`, which is a flaky test also seen on the master branch: https://elek.github.io/ozone-build-results/
   Thanks Marton for this useful page.




[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **decrease** coverage by `1.19%`.
   > The diff coverage is `0.00%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   - Coverage     69.45%   68.26%   -1.20%     
   - Complexity     9112     9113       +1     
   ============================================
     Files           961      965       +4     
     Lines         48148    48929     +781     
     Branches       4679     4788     +109     
   ============================================
   - Hits          33443    33401      -42     
   - Misses        12486    13317     +831     
   + Partials       2219     2211       -8     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...p/ozone/om/ratis/utils/OzoneManagerRatisUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy91dGlscy9Pem9uZU1hbmFnZXJSYXRpc1V0aWxzLmphdmE=) | `67.44% <0.00%> (-19.13%)` | `39.00% <0.00%> (ø%)` | |
   | [...hdds/scm/container/CloseContainerEventHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9DbG9zZUNvbnRhaW5lckV2ZW50SGFuZGxlci5qYXZh) | `72.41% <0.00%> (-17.25%)` | `6.00% <0.00%> (ø%)` | |
   | [.../apache/hadoop/hdds/scm/node/StaleNodeHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL25vZGUvU3RhbGVOb2RlSGFuZGxlci5qYXZh) | `88.88% <0.00%> (-11.12%)` | `4.00% <0.00%> (ø%)` | |
   | ... and [42 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...039300d](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-641415084


   > Thanks for driving this effort @smengcl. Overall it looks good to me. I agree that we are in the last step before the merge.
   > 
   > I have some questions about the code.
   > 
   > (And I feel guilty about the conflict; I can explain the changes I made on master, or I can help to rebase.)
   
   Thanks for the comment @elek .
   
   The merge conflict comes from HDDS-3627 ([commit](https://github.com/apache/hadoop-ozone/commit/072370b947416d89fae11d00a84a1d9a6b31beaa)) as far as I can tell. Shouldn't be a big problem. It is always a delight to see good refactoring. :)
   
   A question though. I notice `TestOzoneFileSystemWithMocks` being removed in HDDS-3627, which I had forked in OFS to create `TestRootedOzoneFileSystemWithMocks`. Should I relocate the latter somewhere else or just remove it as well? For now I will move it under `hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/` to be in the same place as `TestOzoneFSInputStream.java`. The merge conflict resolution work is being done in HDDS-3767.




[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644228910


   Thanks @elek for kicking off another 2 runs. I just pushed another empty commit for a new run.
   
   We might want to exclude (by rebasing) the empty commits from the feature branch when we merge it to master? @elek 
   My plan is to merge the `HDDS-2665-ofs` branch to `master` by running
   ```bash
   git merge --no-ff HDDS-2665-ofs
   ```
   while on the `master` branch, as described in the **Merging a feature branch** section on [this Hadoop Wiki page](https://cwiki.apache.org/confluence/display/HADOOP2/HowToCommit).
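
   As a sanity check, the `--no-ff` behavior can be demonstrated in a throwaway repository — it records an explicit merge commit even when a fast-forward would be possible, preserving the feature branch's shape in history. All paths, user details, and file names below are illustrative:

   ```shell
   #!/bin/sh
   set -e
   tmp=$(mktemp -d)
   cd "$tmp"
   git init -q repo
   cd repo
   git config user.email "demo@example.com"
   git config user.name "Demo"
   base=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
   echo base > file.txt
   git add file.txt
   git commit -qm "initial commit"
   git checkout -qb HDDS-2665-ofs          # feature branch, mirroring the PR's name
   echo feature > ofs.txt
   git add ofs.txt
   git commit -qm "feature work"
   git checkout -q "$base"
   # --no-ff forces a merge commit even though a fast-forward is possible
   git merge --no-ff -m "Merge branch 'HDDS-2665-ofs'" HDDS-2665-ofs
   git rev-list --merges --count HEAD      # exactly one merge commit on the base branch
   ```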




[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r437758264



##########
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##########
@@ -0,0 +1,876 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneKeyDetails;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.VolumeArgs;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.ozone.security.acl.OzoneAclConfig;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_FOUND;
+
+/**
+ * Ozone file system tests that are not covered by contract tests.
+ * TODO: Refactor this and TestOzoneFileSystem later to reduce code duplication.
+ */
+public class TestRootedOzoneFileSystem {
+
+  @Rule
+  public Timeout globalTimeout = new Timeout(300_000);
+
+  private OzoneConfiguration conf;
+  private MiniOzoneCluster cluster = null;
+  private FileSystem fs;
+  private RootedOzoneFileSystem ofs;
+  private ObjectStore objectStore;
+  private static BasicRootedOzoneClientAdapterImpl adapter;
+
+  private String volumeName;
+  private String bucketName;
+  // Store path commonly used by tests that test functionality within a bucket
+  private Path testBucketPath;
+  private String rootPath;
+
+  @Before
+  public void init() throws Exception {
+    conf = new OzoneConfiguration();
+    cluster = MiniOzoneCluster.newBuilder(conf)
+        .setNumDatanodes(3)
+        .build();
+    cluster.waitForClusterToBeReady();
+    objectStore = cluster.getClient().getObjectStore();
+
+    // create a volume and a bucket to be used by RootedOzoneFileSystem (OFS)
+    OzoneBucket bucket = TestDataUtil.createVolumeAndBucket(cluster);
+    volumeName = bucket.getVolumeName();
+    bucketName = bucket.getName();
+    String testBucketStr =
+        OZONE_URI_DELIMITER + volumeName + OZONE_URI_DELIMITER + bucketName;
+    testBucketPath = new Path(testBucketStr);
+
+    rootPath = String.format("%s://%s/",
+        OzoneConsts.OZONE_OFS_URI_SCHEME, conf.get(OZONE_OM_ADDRESS_KEY));
+
+    // Set the fs.defaultFS and start the filesystem
+    conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, rootPath);
+    // Note: FileSystem#loadFileSystems won't load OFS class due to META-INF
+    //  hence this workaround.
+    conf.set("fs.ofs.impl", "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
+    fs = FileSystem.get(conf);
+    ofs = (RootedOzoneFileSystem) fs;
+    adapter = (BasicRootedOzoneClientAdapterImpl) ofs.getAdapter();
+  }
+
+  @After
+  public void teardown() {
+    if (cluster != null) {
+      cluster.shutdown();
+    }
+    IOUtils.closeQuietly(fs);
+  }
+
+  @Test
+  public void testOzoneFsServiceLoader() throws IOException {
+    OzoneConfiguration confTestLoader = new OzoneConfiguration();
+    // Note: FileSystem#loadFileSystems won't load OFS class due to META-INF
+    //  hence this workaround.
+    confTestLoader.set("fs.ofs.impl",
+        "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
+    Assert.assertEquals(FileSystem.getFileSystemClass(
+        OzoneConsts.OZONE_OFS_URI_SCHEME, confTestLoader),
+        RootedOzoneFileSystem.class);
+  }
+
+  @Test
+  public void testCreateDoesNotAddParentDirKeys() throws Exception {
+    Path grandparent = new Path(testBucketPath,
+        "testCreateDoesNotAddParentDirKeys");
+    Path parent = new Path(grandparent, "parent");
+    Path child = new Path(parent, "child");
+    ContractTestUtils.touch(fs, child);
+
+    OzoneKeyDetails key = getKey(child, false);
+    OFSPath childOFSPath = new OFSPath(child);
+    Assert.assertEquals(key.getName(), childOFSPath.getKeyName());
+
+    // Creating a child should not add parent keys to the bucket
+    try {
+      getKey(parent, true);
+    } catch (IOException ex) {
+      assertKeyNotFoundException(ex);
+    }
+
+    // List status on the parent should show the child file
+    Assert.assertEquals(
+        "List status of parent should include the 1 child file",
+        1L, fs.listStatus(parent).length);
+    Assert.assertTrue(
+        "Parent directory does not appear to be a directory",
+        fs.getFileStatus(parent).isDirectory());
+  }
+
+  @Test
+  public void testDeleteCreatesFakeParentDir() throws Exception {
+    Path grandparent = new Path(testBucketPath,
+        "testDeleteCreatesFakeParentDir");
+    Path parent = new Path(grandparent, "parent");
+    Path child = new Path(parent, "child");
+    ContractTestUtils.touch(fs, child);
+
+    // Verify that parent dir key does not exist
+    // Creating a child should not add parent keys to the bucket
+    try {
+      getKey(parent, true);
+    } catch (IOException ex) {
+      assertKeyNotFoundException(ex);
+    }
+
+    // Delete the child key
+    Assert.assertTrue(fs.delete(child, false));
+
+    // Deleting the only child should create the parent dir key if it does
+    // not exist
+    OFSPath parentOFSPath = new OFSPath(parent);
+    String parentKey = parentOFSPath.getKeyName() + "/";
+    OzoneKeyDetails parentKeyInfo = getKey(parent, true);
+    Assert.assertEquals(parentKey, parentKeyInfo.getName());
+
+    // Recursive delete with DeleteIterator
+    Assert.assertTrue(fs.delete(grandparent, true));
+  }
+
+  @Test
+  public void testListStatus() throws Exception {
+    Path parent = new Path(testBucketPath, "testListStatus");
+    Path file1 = new Path(parent, "key1");
+    Path file2 = new Path(parent, "key2");
+
+    FileStatus[] fileStatuses = ofs.listStatus(testBucketPath);
+    Assert.assertEquals("Should be empty", 0, fileStatuses.length);
+
+    ContractTestUtils.touch(fs, file1);
+    ContractTestUtils.touch(fs, file2);
+
+    fileStatuses = ofs.listStatus(testBucketPath);
+    Assert.assertEquals("Should have created parent",
+        1, fileStatuses.length);
+    Assert.assertEquals("Parent path doesn't match",
+        fileStatuses[0].getPath().toUri().getPath(), parent.toString());
+
+    // ListStatus on a directory should return all subdirs along with
+    // files, even if there exists a file and sub-dir with the same name.
+    fileStatuses = ofs.listStatus(parent);
+    Assert.assertEquals(
+        "FileStatus did not return all children of the directory",
+        2, fileStatuses.length);
+
+    // ListStatus should return only the immediate children of a directory.
+    Path file3 = new Path(parent, "dir1/key3");
+    Path file4 = new Path(parent, "dir1/key4");
+    ContractTestUtils.touch(fs, file3);
+    ContractTestUtils.touch(fs, file4);
+    fileStatuses = ofs.listStatus(parent);
+    Assert.assertEquals(
+        "FileStatus did not return all children of the directory",
+        3, fileStatuses.length);
+  }
+
+  /**
+   * OFS: Helper function for tests. Return a volume name that doesn't exist.
+   */
+  private String getRandomNonExistVolumeName() throws IOException {
+    final int numDigit = 5;
+    long retriesLeft = Math.round(Math.pow(10, numDigit));
+    String name = null;
+    while (name == null && retriesLeft-- > 0) {
+      name = "volume-" + RandomStringUtils.randomNumeric(numDigit);
+      // Check volume existence.
+      Iterator<? extends OzoneVolume> iter =
+          objectStore.listVolumesByUser(null, name, null);
+      if (iter.hasNext()) {
+        // If there is a match, try again.
+        // Note that volume name prefix match doesn't equal volume existence
+        //  but the check is sufficient for this test.
+        name = null;
+      }
+    }
+    if (name == null) {
+      Assert.fail(
+          "Failed to generate random volume name that doesn't exist already.");
+    }
+    return name;
+  }
+
+  /**
+   * OFS: Test mkdir on volume, bucket and dir that doesn't exist.
+   */
+  @Test
+  public void testMkdirOnNonExistentVolumeBucketDir() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    String bucketNameLocal = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path root = new Path("/" + volumeNameLocal + "/" + bucketNameLocal);
+    Path dir1 = new Path(root, "dir1");
+    Path dir12 = new Path(dir1, "dir12");
+    Path dir2 = new Path(root, "dir2");
+    fs.mkdirs(dir12);
+    fs.mkdirs(dir2);
+
+    // Check volume and bucket existence, they should both be created.
+    OzoneVolume ozoneVolume = objectStore.getVolume(volumeNameLocal);
+    OzoneBucket ozoneBucket = ozoneVolume.getBucket(bucketNameLocal);
+    OFSPath ofsPathDir1 = new OFSPath(dir12);
+    String key = ofsPathDir1.getKeyName() + "/";
+    OzoneKeyDetails ozoneKeyDetails = ozoneBucket.getKey(key);
+    Assert.assertEquals(key, ozoneKeyDetails.getName());
+
+    // Verify that directories are created.
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        dir1.toString(), fileStatuses[0].getPath().toUri().getPath());
+    Assert.assertEquals(
+        dir2.toString(), fileStatuses[1].getPath().toUri().getPath());
+
+    fileStatuses = ofs.listStatus(dir1);
+    Assert.assertEquals(
+        dir12.toString(), fileStatuses[0].getPath().toUri().getPath());
+    fileStatuses = ofs.listStatus(dir12);
+    Assert.assertEquals(0, fileStatuses.length);
+    fileStatuses = ofs.listStatus(dir2);
+    Assert.assertEquals(0, fileStatuses.length);
+  }
+
+  /**
+   * OFS: Test mkdir on a volume and bucket that doesn't exist.
+   */
+  @Test
+  public void testMkdirNonExistentVolumeBucket() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    String bucketNameLocal = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path newVolBucket = new Path(
+        "/" + volumeNameLocal + "/" + bucketNameLocal);
+    fs.mkdirs(newVolBucket);
+
+    // Verify with listVolumes and listBuckets
+    Iterator<? extends OzoneVolume> iterVol =
+        objectStore.listVolumesByUser(null, volumeNameLocal, null);
+    OzoneVolume ozoneVolume = iterVol.next();
+    Assert.assertNotNull(ozoneVolume);
+    Assert.assertEquals(volumeNameLocal, ozoneVolume.getName());
+
+    Iterator<? extends OzoneBucket> iterBuc =
+        ozoneVolume.listBuckets("bucket-");
+    OzoneBucket ozoneBucket = iterBuc.next();
+    Assert.assertNotNull(ozoneBucket);
+    Assert.assertEquals(bucketNameLocal, ozoneBucket.getName());
+
+    // TODO: Use listStatus to check volume and bucket creation in HDDS-2928.
+  }
+
+  /**
+   * OFS: Test mkdir on a volume that doesn't exist.
+   */
+  @Test
+  public void testMkdirNonExistentVolume() throws Exception {
+    String volumeNameLocal = getRandomNonExistVolumeName();
+    Path newVolume = new Path("/" + volumeNameLocal);
+    fs.mkdirs(newVolume);
+
+    // Verify with listVolumes and listBuckets
+    Iterator<? extends OzoneVolume> iterVol =
+        objectStore.listVolumesByUser(null, volumeNameLocal, null);
+    OzoneVolume ozoneVolume = iterVol.next();
+    Assert.assertNotNull(ozoneVolume);
+    Assert.assertEquals(volumeNameLocal, ozoneVolume.getName());
+
+    // TODO: Use listStatus to check volume and bucket creation in HDDS-2928.
+  }
+
+  /**
+   * OFS: Test getFileStatus on root.
+   */
+  @Test
+  public void testGetFileStatusRoot() throws Exception {
+    Path root = new Path("/");
+    FileStatus fileStatus = fs.getFileStatus(root);
+    Assert.assertNotNull(fileStatus);
+    Assert.assertEquals(new Path(rootPath), fileStatus.getPath());
+    Assert.assertTrue(fileStatus.isDirectory());
+    Assert.assertEquals(FsPermission.getDirDefault(),
+        fileStatus.getPermission());
+  }
+
+  /**
+   * Test listStatus operation in a bucket.
+   */
+  @Test
+  public void testListStatusInBucket() throws Exception {
+    Path root = new Path("/" + volumeName + "/" + bucketName);
+    Path dir1 = new Path(root, "dir1");
+    Path dir12 = new Path(dir1, "dir12");
+    Path dir2 = new Path(root, "dir2");
+    fs.mkdirs(dir12);
+    fs.mkdirs(dir2);
+
+    // ListStatus on root should return dir1 (even though /dir1 key does not
+    // exist) and dir2 only. dir12 is not an immediate child of root and
+    // hence should not be listed.
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        "FileStatus should return only the immediate children",
+        2, fileStatuses.length);
+
+    // Verify that dir12 is not included in the result of the listStatus on root
+    String fileStatus1 = fileStatuses[0].getPath().toUri().getPath();
+    String fileStatus2 = fileStatuses[1].getPath().toUri().getPath();
+    Assert.assertNotEquals(fileStatus1, dir12.toString());
+    Assert.assertNotEquals(fileStatus2, dir12.toString());
+  }
+
+  /**
+   * Tests listStatus operation on root directory.
+   */
+  @Test
+  public void testListStatusOnLargeDirectory() throws Exception {
+    Path root = new Path("/" + volumeName + "/" + bucketName);
+    Set<String> paths = new TreeSet<>();
+    int numDirs = LISTING_PAGE_SIZE + LISTING_PAGE_SIZE / 2;
+    for (int i = 0; i < numDirs; i++) {
+      Path p = new Path(root, String.valueOf(i));
+      fs.mkdirs(p);
+      paths.add(p.getName());
+    }
+
+    FileStatus[] fileStatuses = ofs.listStatus(root);
+    Assert.assertEquals(
+        "Total directories listed do not match the existing directories",
+        numDirs, fileStatuses.length);
+
+    for (int i = 0; i < numDirs; i++) {
+      Assert.assertTrue(paths.contains(fileStatuses[i].getPath().getName()));
+    }
+  }
+
+  /**
+   * Tests listStatus on a path with subdirs.
+   */
+  @Test
+  public void testListStatusOnSubDirs() throws Exception {
+    // Create the following key structure
+    //      /dir1/dir11/dir111
+    //      /dir1/dir12
+    //      /dir1/dir12/file121
+    //      /dir2
+    // ListStatus on /dir1 should return only its immediate subdirs,
+    // which are /dir1/dir11 and /dir1/dir12. Deeper descendants
+    // (/dir1/dir12/file121 and /dir1/dir11/dir111) should not be returned
+    // by listStatus.
+    Path dir1 = new Path(testBucketPath, "dir1");
+    Path dir11 = new Path(dir1, "dir11");
+    Path dir111 = new Path(dir11, "dir111");
+    Path dir12 = new Path(dir1, "dir12");
+    Path file121 = new Path(dir12, "file121");
+    Path dir2 = new Path(testBucketPath, "dir2");
+    fs.mkdirs(dir111);
+    fs.mkdirs(dir12);
+    ContractTestUtils.touch(fs, file121);
+    fs.mkdirs(dir2);
+
+    FileStatus[] fileStatuses = ofs.listStatus(dir1);
+    Assert.assertEquals(
+        "FileStatus should return only the immediate children",
+        2, fileStatuses.length);
+
+    // Verify that the two children of /dir1 returned by listStatus operation
+    // are /dir1/dir11 and /dir1/dir12.
+    String fileStatus1 = fileStatuses[0].getPath().toUri().getPath();
+    String fileStatus2 = fileStatuses[1].getPath().toUri().getPath();
+    Assert.assertTrue(fileStatus1.equals(dir11.toString()) ||
+        fileStatus1.equals(dir12.toString()));
+    Assert.assertTrue(fileStatus2.equals(dir11.toString()) ||
+        fileStatus2.equals(dir12.toString()));
+  }
+
+  @Test
+  public void testNonExplicitlyCreatedPathExistsAfterItsLeafsWereRemoved()
+      throws Exception {
+    Path source = new Path(testBucketPath, "source");
+    Path interimPath = new Path(source, "interimPath");
+    Path leafInsideInterimPath = new Path(interimPath, "leaf");
+    Path target = new Path(testBucketPath, "target");
+    Path leafInTarget = new Path(target, "leaf");
+
+    fs.mkdirs(source);
+    fs.mkdirs(target);
+    fs.mkdirs(leafInsideInterimPath);
+
+    Assert.assertTrue(fs.rename(leafInsideInterimPath, leafInTarget));
+
+    // after rename listStatus for interimPath should succeed and
+    // interimPath should have no children
+    FileStatus[] statuses = fs.listStatus(interimPath);
+    Assert.assertNotNull("listStatus returned a null array", statuses);
+    Assert.assertEquals("Statuses array is not empty", 0, statuses.length);
+    FileStatus fileStatus = fs.getFileStatus(interimPath);
+    Assert.assertEquals("FileStatus does not point to interimPath",
+        interimPath.getName(), fileStatus.getPath().getName());
+  }
+
+  /**
+   * OFS: Try to rename a key to a different bucket. The attempt should fail.
+   */
+  @Test
+  public void testRenameToDifferentBucket() throws IOException {
+    Path source = new Path(testBucketPath, "source");
+    Path interimPath = new Path(source, "interimPath");
+    Path leafInsideInterimPath = new Path(interimPath, "leaf");
+    Path target = new Path(testBucketPath, "target");
+
+    fs.mkdirs(source);
+    fs.mkdirs(target);
+    fs.mkdirs(leafInsideInterimPath);
+
+    // Attempt to rename the key to a different bucket
+    Path bucket2 = new Path(OZONE_URI_DELIMITER + volumeName +
+        OZONE_URI_DELIMITER + bucketName + "test");
+    Path leafInTargetInAnotherBucket = new Path(bucket2, "leaf");
+    try {
+      fs.rename(leafInsideInterimPath, leafInTargetInAnotherBucket);
+      Assert.fail(
+          "Should have thrown exception when renaming to a different bucket");
+    } catch (IOException ignored) {
+      // Test passed. Exception thrown as expected.
+    }
+  }
+
+  private OzoneKeyDetails getKey(Path keyPath, boolean isDirectory)
+      throws IOException {
+    String key = ofs.pathToKey(keyPath);
+    if (isDirectory) {
+      key = key + OZONE_URI_DELIMITER;
+    }
+    OFSPath ofsPath = new OFSPath(key);
+    String keyInBucket = ofsPath.getKeyName();
+    return cluster.getClient().getObjectStore().getVolume(volumeName)
+        .getBucket(bucketName).getKey(keyInBucket);
+  }
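   The `getKey` helper above, like the mkdir and rename tests, relies on OFS paths having the rooted form `/volume/bucket/key`, with `OFSPath` splitting out the volume, bucket, and in-bucket key name. As a rough illustration of that decomposition (a hypothetical standalone `SimpleOfsPath`, not the actual `org.apache.hadoop.fs.ozone.OFSPath` class):

   ```java
   // Hypothetical sketch of ofs:// path decomposition: /<volume>/<bucket>/<key...>
   public class SimpleOfsPath {
     private final String volumeName;
     private final String bucketName;
     private final String keyName;

     public SimpleOfsPath(String path) {
       // Strip leading slashes, then split into at most three components;
       // everything after the bucket is the key name relative to the bucket.
       String[] parts = path.replaceFirst("^/+", "").split("/", 3);
       volumeName = parts.length > 0 ? parts[0] : "";
       bucketName = parts.length > 1 ? parts[1] : "";
       keyName = parts.length > 2 ? parts[2] : "";
     }

     public String getVolumeName() { return volumeName; }
     public String getBucketName() { return bucketName; }
     public String getKeyName() { return keyName; }

     public static void main(String[] args) {
       SimpleOfsPath p = new SimpleOfsPath("/vol1/bucket1/dir1/dir12");
       System.out.println(p.getVolumeName()); // vol1
       System.out.println(p.getBucketName()); // bucket1
       System.out.println(p.getKeyName());    // dir1/dir12
     }
   }
   ```

   Note that a directory key stored in the bucket carries a trailing `/` (as in the `getKey` helper), which this sketch does not add.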
+
+  private void assertKeyNotFoundException(IOException ex) {
+    GenericTestUtils.assertExceptionContains("KEY_NOT_FOUND", ex);
+  }
+
+  /**
+   * Helper function for testListStatusRootAndVolume*.
+   * Each call creates one volume, one bucket under that volume,
+   * two dir under that bucket, one subdir under one of the dirs,
+   * and one file under the subdir.
+   */
+  private Path createRandomVolumeBucketWithDirs() throws IOException {
+    String volume1 = getRandomNonExistVolumeName();
+    String bucket1 = "bucket-" + RandomStringUtils.randomNumeric(5);
+    Path bucketPath1 = new Path(
+        OZONE_URI_DELIMITER + volume1 + OZONE_URI_DELIMITER + bucket1);
+
+    Path dir1 = new Path(bucketPath1, "dir1");
+    fs.mkdirs(dir1);  // Intentionally creating this "in-the-middle" dir key
+    Path subdir1 = new Path(dir1, "subdir1");
+    fs.mkdirs(subdir1);
+    Path dir2 = new Path(bucketPath1, "dir2");
+    fs.mkdirs(dir2);
+
+    try (FSDataOutputStream stream =
+        ofs.create(new Path(dir2, "file1"))) {
+      stream.write(1);
+    }
+
+    return bucketPath1;
+  }
+
+  /**
+   * OFS: Test non-recursive listStatus on root and volume.
+   */
+  @Test
+  public void testListStatusRootAndVolumeNonRecursive() throws Exception {
+    Path bucketPath1 = createRandomVolumeBucketWithDirs();
+    createRandomVolumeBucketWithDirs();
+    // listStatus("/volume/bucket")
+    FileStatus[] fileStatusBucket = ofs.listStatus(bucketPath1);
+    Assert.assertEquals(2, fileStatusBucket.length);
+    // listStatus("/volume")
+    Path volume = new Path(
+        OZONE_URI_DELIMITER + new OFSPath(bucketPath1).getVolumeName());
+    FileStatus[] fileStatusVolume = ofs.listStatus(volume);
+    Assert.assertEquals(1, fileStatusVolume.length);
+    // listStatus("/")
+    Path root = new Path(OZONE_URI_DELIMITER);
+    FileStatus[] fileStatusRoot = ofs.listStatus(root);
+    Assert.assertEquals(2, fileStatusRoot.length);
+  }
+
+  /**
+   * Helper function to do FileSystem#listStatus recursively.
+   * Simulate what FsShell does, using DFS.
+   */
+  private void listStatusRecursiveHelper(Path curPath, List<FileStatus> result)
+      throws IOException {
+    FileStatus[] startList = ofs.listStatus(curPath);
+    for (FileStatus fileStatus : startList) {
+      result.add(fileStatus);
+      if (fileStatus.isDirectory()) {
+        Path nextPath = fileStatus.getPath();
+        listStatusRecursiveHelper(nextPath, result);
+      }
+    }
+  }
+
+  /**
+   * Helper function to call listStatus in adapter implementation.
+   */
+  private List<FileStatus> callAdapterListStatus(String pathStr,
+      boolean recursive, String startPath, long numEntries) throws IOException {
+    return adapter.listStatus(pathStr, recursive, startPath, numEntries,
+        ofs.getUri(), ofs.getWorkingDirectory(), ofs.getUsername())
+        .stream().map(ofs::convertFileStatus).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to compare recursive listStatus results from adapter
+   * and (simulated) FileSystem.
+   */
+  private void listStatusCheckHelper(Path path) throws IOException {
+    // Get recursive listStatus result directly from adapter impl
+    List<FileStatus> statusesFromAdapter = callAdapterListStatus(
+        path.toString(), true, "", 1000);
+    // Get recursive listStatus result with FileSystem API by simulating FsShell
+    List<FileStatus> statusesFromFS = new ArrayList<>();
+    listStatusRecursiveHelper(path, statusesFromFS);
+    // Compare. The results would be in the same order due to assumptions:
+    // 1. They are both using DFS internally;
+    // 2. They both return ordered results.
+    Assert.assertEquals(statusesFromAdapter.size(), statusesFromFS.size());
+    final int n = statusesFromFS.size();
+    for (int i = 0; i < n; i++) {
+      FileStatus statusFromAdapter = statusesFromAdapter.get(i);
+      FileStatus statusFromFS = statusesFromFS.get(i);
+      Assert.assertEquals(statusFromAdapter.getPath(), statusFromFS.getPath());
+      Assert.assertEquals(statusFromAdapter.getLen(), statusFromFS.getLen());
+      Assert.assertEquals(statusFromAdapter.isDirectory(),
+          statusFromFS.isDirectory());
+      // TODO: When HDDS-3054 is in, uncomment the lines below.

Review comment:
       Addressed in HDDS-3767.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-638971553


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [6 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [4 Security Hotspots](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [67 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/0.png' alt='0.0%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [0.0% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='15.8%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [15.8% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **decrease** coverage by `1.12%`.
   > The diff coverage is `0.00%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   - Coverage     69.45%   68.33%   -1.13%     
   - Complexity     9112     9120       +8     
   ============================================
     Files           961      965       +4     
     Lines         48148    48950     +802     
     Branches       4679     4791     +112     
   ============================================
   + Hits          33443    33450       +7     
   - Misses        12486    13291     +805     
   + Partials       2219     2209      -10     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | [...hdds/scm/container/CloseContainerEventHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9DbG9zZUNvbnRhaW5lckV2ZW50SGFuZGxlci5qYXZh) | `72.41% <0.00%> (-17.25%)` | `6.00% <0.00%> (ø%)` | |
   | [.../apache/hadoop/hdds/scm/node/StaleNodeHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL25vZGUvU3RhbGVOb2RlSGFuZGxlci5qYXZh) | `88.88% <0.00%> (-11.12%)` | `4.00% <0.00%> (ø%)` | |
   | [...va/org/apache/hadoop/hdds/utils/db/RDBMetrics.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvZnJhbWV3b3JrL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy91dGlscy9kYi9SREJNZXRyaWNzLmphdmE=) | `85.71% <0.00%> (-7.15%)` | `13.00% <0.00%> (-1.00%)` | |
   | ... and [23 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...0a972d9](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-641415084


   > Thanks to drive this effort @smengcl. Overall it looks good to me. I agree that we are in the last step before the merge.
   > 
   > I have some questions about the code.
   > 
   > (And I feel myself guilty of the conflict, I can explain the changes on the master what I did, or I can help to rebase it.)
   
   Thanks for the comment @elek .
   
   The merge conflict comes from HDDS-3627 ([commit](https://github.com/apache/hadoop-ozone/commit/072370b947416d89fae11d00a84a1d9a6b31beaa)) as far as I can tell. Shouldn't be a big problem. It is always a delight to see good refactoring. :)
   
   ~~A question though. I notice `TestOzoneFileSystemWithMocks` being removed in HDDS-3627, where in OFS I forked it to create `TestRootedOzoneFileSystemWithMocks`. Should I relocate the latter to somewhere else or just remove it as well? For now I will move it under `hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/` to be in the same place with `TestOzoneFSInputStream.java`.~~ The merge conflict resolution work is being done in HDDS-3767.




[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **increase** coverage by `0.98%`.
   > The diff coverage is `71.82%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   + Coverage     69.45%   70.44%   +0.98%     
   - Complexity     9112     9376     +264     
   ============================================
     Files           961      965       +4     
     Lines         48148    48932     +784     
     Branches       4679     4788     +109     
   ============================================
   + Hits          33443    34469    +1026     
   + Misses        12486    12148     -338     
   - Partials       2219     2315      +96     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `70.05% <0.00%> (+70.05%)` | `28.00 <0.00> (+28.00)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `75.24% <ø> (+75.24%)` | `51.00 <0.00> (+51.00)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `41.66% <41.66%> (ø)` | `2.00 <2.00> (?)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `68.45% <68.45%> (ø)` | `47.00 <47.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `74.40% <74.40%> (ø)` | `50.00 <50.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `79.59% <79.59%> (ø)` | `37.00 <37.00> (?)` | |
   | [...otocol/commands/RetriableDatanodeEventWatcher.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL3Byb3RvY29sL2NvbW1hbmRzL1JldHJpYWJsZURhdGFub2RlRXZlbnRXYXRjaGVyLmphdmE=) | `55.55% <0.00%> (-44.45%)` | `3.00% <0.00%> (-1.00%)` | |
   | [...p/ozone/om/ratis/utils/OzoneManagerRatisUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy91dGlscy9Pem9uZU1hbmFnZXJSYXRpc1V0aWxzLmphdmE=) | `67.44% <0.00%> (-19.13%)` | `39.00% <0.00%> (ø%)` | |
   | [...hdds/scm/container/CloseContainerEventHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9DbG9zZUNvbnRhaW5lckV2ZW50SGFuZGxlci5qYXZh) | `72.41% <0.00%> (-17.25%)` | `6.00% <0.00%> (ø%)` | |
   | ... and [46 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...c420b9a](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r437760431



##########
File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
##########
@@ -0,0 +1,904 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.apache.http.client.utils.URIBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_EMPTY;
+
+/**
+ * The minimal Ozone Filesystem implementation.
+ * <p>
+ * This is a basic version which doesn't extend
+ * KeyProviderTokenIssuer and doesn't include statistics. It can be used
+ * from older hadoop version. For newer hadoop version use the full featured
+ * BasicRootedOzoneFileSystem.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class BasicRootedOzoneFileSystem extends FileSystem {
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneFileSystem.class);
+
+  /**
+   * The Ozone client for connecting to Ozone server.
+   */
+
+  private URI uri;
+  private String userName;
+  private Path workingDir;
+  private OzoneClientAdapter adapter;
+  private BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+  private static final String URI_EXCEPTION_TEXT =
+      "URL should be one of the following formats: " +
+      "ofs://om-service-id/path/to/key  OR " +
+      "ofs://om-host.example.com/path/to/key  OR " +
+      "ofs://om-host.example.com:5678/path/to/key";
+
+  @Override
+  public void initialize(URI name, Configuration conf) throws IOException {
+    super.initialize(name, conf);
+    setConf(conf);
+    Objects.requireNonNull(name.getScheme(), "No scheme provided in " + name);
+    Preconditions.checkArgument(getScheme().equals(name.getScheme()),
+        "Invalid scheme provided in " + name);
+
+    String authority = name.getAuthority();
+    if (authority == null) {
+      // authority is null when fs.defaultFS is not a qualified ofs URI and
+      // ofs:/// is passed to the client. matcher will NPE if authority is null
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+
+    String omHostOrServiceId;
+    int omPort = -1;
+    // Parse hostname and port
+    String[] parts = authority.split(":");
+    if (parts.length > 2) {
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+    omHostOrServiceId = parts[0];
+    if (parts.length == 2) {
+      try {
+        omPort = Integer.parseInt(parts[1]);
+      } catch (NumberFormatException e) {
+        throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+      }
+    }
+
+    try {
+      uri = new URIBuilder().setScheme(OZONE_OFS_URI_SCHEME)
+          .setHost(authority)
+          .build();
+      LOG.trace("Ozone URI for OFS initialization is " + uri);
+
+      //isolated is the default for ozonefs-lib-legacy which includes the
+      // /ozonefs.txt, otherwise the default is false. It could be overridden.
+      boolean defaultValue =
+          BasicRootedOzoneFileSystem.class.getClassLoader()
+              .getResource("ozonefs.txt") != null;
+
+      //Use string here instead of the constant as constant may not be available
+      //on the classpath of a hadoop 2.7
+      boolean isolatedClassloader =
+          conf.getBoolean("ozone.fs.isolated-classloader", defaultValue);
+
+      ConfigurationSource source;
+      if (conf instanceof OzoneConfiguration) {
+        source = (ConfigurationSource) conf;
+      } else {
+        source = new LegacyHadoopConfigurationSource(conf);
+      }
+      this.adapter =
+          createAdapter(source,
+              omHostOrServiceId, omPort,
+              isolatedClassloader);
+      this.adapterImpl = (BasicRootedOzoneClientAdapterImpl) this.adapter;
+
+      try {
+        this.userName =
+            UserGroupInformation.getCurrentUser().getShortUserName();
+      } catch (IOException e) {
+        this.userName = OZONE_DEFAULT_USER;
+      }
+      this.workingDir = new Path(OZONE_USER_DIR, this.userName)
+          .makeQualified(this.uri, this.workingDir);
+    } catch (URISyntaxException ue) {
+      final String msg = "Invalid Ozone endpoint " + name;
+      LOG.error(msg, ue);
+      throw new IOException(msg, ue);
+    }
+  }
+
+  protected OzoneClientAdapter createAdapter(ConfigurationSource conf,
+      String omHost, int omPort, boolean isolatedClassloader)
+      throws IOException {
+
+    if (isolatedClassloader) {
+      return OzoneClientAdapterFactory.createAdapter();
+    } else {
+      return new BasicRootedOzoneClientAdapterImpl(omHost, omPort, conf);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    try {
+      adapter.close();
+    } finally {
+      super.close();
+    }
+  }
+
+  @Override
+  public URI getUri() {
+    return uri;
+  }
+
+  @Override
+  public String getScheme() {
+    return OZONE_OFS_URI_SCHEME;
+  }
+
+  @Override
+  public FSDataInputStream open(Path path, int bufferSize) throws IOException {
+    incrementCounter(Statistic.INVOCATION_OPEN);
+    statistics.incrementReadOps(1);
+    LOG.trace("open() path: {}", path);
+    final String key = pathToKey(path);
+    return new FSDataInputStream(
+        new OzoneFSInputStream(adapter.readFile(key), statistics));
+  }
+
+  protected void incrementCounter(Statistic statistic) {
+    //don't do anything in this default implementation.
+  }
+
+  @Override
+  public FSDataOutputStream create(Path f, FsPermission permission,
+      boolean overwrite, int bufferSize,
+      short replication, long blockSize,
+      Progressable progress) throws IOException {
+    LOG.trace("create() path:{}", f);
+    incrementCounter(Statistic.INVOCATION_CREATE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(f);
+    return createOutputStream(key, replication, overwrite, true);
+  }
+
+  @Override
+  public FSDataOutputStream createNonRecursive(Path path,
+      FsPermission permission,
+      EnumSet<CreateFlag> flags,
+      int bufferSize,
+      short replication,
+      long blockSize,
+      Progressable progress) throws IOException {
+    incrementCounter(Statistic.INVOCATION_CREATE_NON_RECURSIVE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(path);
+    return createOutputStream(key,
+        replication, flags.contains(CreateFlag.OVERWRITE), false);
+  }
+
+  private FSDataOutputStream createOutputStream(String key, short replication,
+      boolean overwrite, boolean recursive) throws IOException {
+    return new FSDataOutputStream(adapter.createFile(key,
+        replication, overwrite, recursive), statistics);
+  }
+
+  @Override
+  public FSDataOutputStream append(Path f, int bufferSize,
+      Progressable progress) throws IOException {
+    throw new UnsupportedOperationException("append() Not implemented by the "
+        + getClass().getSimpleName() + " FileSystem implementation");
+  }
+
+  private class RenameIterator extends OzoneListingIterator {
+    private final String srcPath;
+    private final String dstPath;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    RenameIterator(Path srcPath, Path dstPath)
+        throws IOException {
+      super(srcPath);
+      this.srcPath = pathToKey(srcPath);
+      this.dstPath = pathToKey(dstPath);
+      LOG.trace("rename from:{} to:{}", this.srcPath, this.dstPath);
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(srcPath);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) throws IOException {
+      String newPath = dstPath.concat(keyPath.substring(srcPath.length()));
+      adapterImpl.rename(this.bucket, keyPath, newPath);
+      return true;
+    }
+  }
+
+  /**
+   * Check whether the source and destination path are valid and then perform
+   * rename from source path to destination path.
+   * <p>
+   * The rename operation is performed by renaming the keys with src as prefix.
+   * For such keys the prefix is changed from src to dst.
+   *
+   * @param src source path for rename
+   * @param dst destination path for rename
+   * @return true if rename operation succeeded or
+   * if the src and dst have the same path and are of the same type
+   * @throws IOException on I/O errors or if the src/dst paths are invalid.
+   */
+  @Override
+  public boolean rename(Path src, Path dst) throws IOException {
+    incrementCounter(Statistic.INVOCATION_RENAME);
+    statistics.incrementWriteOps(1);
+    if (src.equals(dst)) {
+      return true;
+    }
+
+    LOG.trace("rename() from: {} to: {}", src, dst);
+    if (src.isRoot()) {
+      // Cannot rename root of file system
+      LOG.trace("Cannot rename the root of a filesystem");
+      return false;
+    }
+
+    // src and dst should be in the same bucket
+    OFSPath ofsSrc = new OFSPath(src);
+    OFSPath ofsDst = new OFSPath(dst);
+    if (!ofsSrc.isInSameBucketAs(ofsDst)) {
+      throw new IOException("Cannot rename a key to a different bucket");
+    }
+
+    // Cannot rename a directory to its own subdirectory
+    Path dstParent = dst.getParent();
+    while (dstParent != null && !src.equals(dstParent)) {
+      dstParent = dstParent.getParent();
+    }
+    Preconditions.checkArgument(dstParent == null,
+        "Cannot rename a directory to its own subdirectory");
+    // Check if the source exists
+    FileStatus srcStatus;
+    try {
+      srcStatus = getFileStatus(src);
+    } catch (FileNotFoundException fnfe) {
+      // source doesn't exist, return
+      return false;
+    }
+
+    // Check if the destination exists
+    FileStatus dstStatus;
+    try {
+      dstStatus = getFileStatus(dst);
+    } catch (FileNotFoundException fnde) {
+      dstStatus = null;
+    }
+
+    if (dstStatus == null) {
+      // If dst doesn't exist, check whether dst parent dir exists or not
+      // if the parent exists, the source can still be renamed to dst path
+      dstStatus = getFileStatus(dst.getParent());
+      if (!dstStatus.isDirectory()) {
+        throw new IOException(String.format(
+            "Failed to rename %s to %s, %s is a file", src, dst,
+            dst.getParent()));
+      }
+    } else {
+      // if dst exists and source and destination are same,
+      // check both the src and dst are of same type
+      if (srcStatus.getPath().equals(dstStatus.getPath())) {
+        return !srcStatus.isDirectory();
+      } else if (dstStatus.isDirectory()) {
+        // If dst is a directory, rename source as subpath of it.
+        // for example rename /source to /dst will lead to /dst/source
+        dst = new Path(dst, src.getName());
+        FileStatus[] statuses;
+        try {
+          statuses = listStatus(dst);
+        } catch (FileNotFoundException fnde) {
+          statuses = null;
+        }
+
+        if (statuses != null && statuses.length > 0) {
+          // If dst exists and is a non-empty directory
+          throw new FileAlreadyExistsException(String.format(
+              "Failed to rename %s to %s, file already exists or not empty!",
+              src, dst));
+        }
+      } else {
+        // If dst is not a directory
+        throw new FileAlreadyExistsException(String.format(
+            "Failed to rename %s to %s, file already exists!", src, dst));
+      }
+    }
+
+    if (srcStatus.isDirectory()) {
+      if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) {
+        LOG.trace("Cannot rename a directory to a subdirectory of self");
+        return false;
+      }
+    }
+    RenameIterator iterator = new RenameIterator(src, dst);
+    boolean result = iterator.iterate();
+    if (result) {
+      createFakeParentDirectory(src);
+    }
+    return result;
+  }
+
+  private class DeleteIterator extends OzoneListingIterator {
+    final private boolean recursive;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    DeleteIterator(Path f, boolean recursive)
+        throws IOException {
+      super(f);
+      this.recursive = recursive;
+      if (getStatus().isDirectory()
+          && !this.recursive
+          && listStatus(f).length != 0) {
+        throw new PathIsNotEmptyDirectoryException(f.toString());
+      }
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(f);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) {
+      if (keyPath.equals("")) {
+        LOG.trace("Skipping deleting root directory");
+        return true;
+      } else {
+        LOG.trace("Deleting: {}", keyPath);
+        boolean succeed = adapterImpl.deleteObject(this.bucket, keyPath);
+        // if recursive delete is requested ignore the return value of
+        // deleteObject and issue deletes for other keys.
+        return recursive || succeed;
+      }
+    }
+  }
+
+  /**
+   * Deletes the children of the input dir path by iterating though the
+   * DeleteIterator.
+   *
+   * @param f directory path to be deleted
+   * @return true if successfully deletes all required keys, false otherwise
+   * @throws IOException
+   */
+  private boolean innerDelete(Path f, boolean recursive) throws IOException {
+    LOG.trace("delete() path:{} recursive:{}", f, recursive);
+    try {
+      DeleteIterator iterator = new DeleteIterator(f, recursive);
+      return iterator.iterate();
+    } catch (FileNotFoundException e) {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Couldn't delete {} - does not exist", f);
+      }
+      return false;
+    }
+  }
+
+  @Override
+  public boolean delete(Path f, boolean recursive) throws IOException {
+    incrementCounter(Statistic.INVOCATION_DELETE);
+    statistics.incrementWriteOps(1);
+    LOG.debug("Delete path {} - recursive {}", f, recursive);
+    FileStatus status;
+    try {
+      status = getFileStatus(f);
+    } catch (FileNotFoundException ex) {
+      LOG.warn("delete: Path does not exist: {}", f);
+      return false;
+    }
+
+    if (status == null) {
+      return false;
+    }
+
+    String key = pathToKey(f);
+    boolean result;
+
+    if (status.isDirectory()) {
+      LOG.debug("delete: Path is a directory: {}", f);
+      OFSPath ofsPath = new OFSPath(key);
+
+      // Handle rm root
+      if (ofsPath.isRoot()) {
+        // Intentionally drop support for rm root
+        // because it is too dangerous and doesn't provide much value
+        LOG.warn("delete: OFS does not support rm root. "
+            + "To wipe the cluster, please re-init OM instead.");
+        return false;
+      }
+
+      // Handle delete volume
+      if (ofsPath.isVolume()) {
+        String volumeName = ofsPath.getVolumeName();
+        if (recursive) {
+          // Delete all buckets first
+          OzoneVolume volume =
+              adapterImpl.getObjectStore().getVolume(volumeName);

Review comment:
       Yes, I agree. When implementing OFS volume and bucket deletion I realized I can't put the recursion logic entirely in the adapter. Hence the hacky workaround.
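    For context on the rooted scheme the diff above implements, here is a minimal sketch of how a rooted path splits into volume/bucket/key. This is a hypothetical simplification loosely mirroring `OFSPath` from the diff, not the actual Ozone implementation; the class and method names are illustrative only.

    ```java
    // Hypothetical sketch of rooted-path parsing (illustrative, not the
    // real org.apache.hadoop.fs.ozone.OFSPath class).
    public class OfsPathSketch {
      final String volume;
      final String bucket;
      final String key;

      OfsPathSketch(String path) {
        // Strip leading '/' and split into at most 3 parts: volume/bucket/key.
        String[] parts = path.replaceAll("^/+", "").split("/", 3);
        volume = parts.length > 0 ? parts[0] : "";
        bucket = parts.length > 1 ? parts[1] : "";
        key = parts.length > 2 ? parts[2] : "";
      }

      boolean isRoot()   { return volume.isEmpty(); }
      boolean isVolume() { return !volume.isEmpty() && bucket.isEmpty(); }

      // Rename is only allowed within one bucket, as enforced in the diff.
      boolean isInSameBucketAs(OfsPathSketch other) {
        return volume.equals(other.volume) && bucket.equals(other.bucket);
      }
    }
    ```

    With this model, `/vol1/bucket1/dir/key1` yields volume `vol1`, bucket `bucket1`, and key `dir/key1`, while `/` is the root and `/vol1` is a volume, matching the `isRoot()`/`isVolume()` checks used by the delete path above.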






[GitHub] [hadoop-ozone] codecov-commenter edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-642834625


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=h1) Report
   > Merging [#1021](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=desc) into [master](https://codecov.io/gh/apache/hadoop-ozone/commit/f7fcadc0511afb2ad650843bfb03f7538a69b144&el=desc) will **increase** coverage by `1.42%`.
   > The diff coverage is `71.82%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1021      +/-   ##
   ============================================
   + Coverage     69.45%   70.87%   +1.42%     
   - Complexity     9112     9702     +590     
   ============================================
     Files           961      965       +4     
     Lines         48148    50496    +2348     
     Branches       4679     5071     +392     
   ============================================
   + Hits          33443    35791    +2348     
   + Misses        12486    12348     -138     
   - Partials       2219     2357     +138     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==) | `84.21% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [...e/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `70.05% <0.00%> (+70.05%)` | `28.00 <0.00> (+28.00)` | |
   | [...g/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `75.24% <ø> (+75.24%)` | `51.00 <0.00> (+51.00)` | |
   | [.../hadoop/fs/ozone/RootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvUm9vdGVkT3pvbmVDbGllbnRBZGFwdGVySW1wbC5qYXZh) | `41.66% <41.66%> (ø)` | `2.00 <2.00> (?)` | |
   | [...op/fs/ozone/BasicRootedOzoneClientAdapterImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUNsaWVudEFkYXB0ZXJJbXBsLmphdmE=) | `68.45% <68.45%> (ø)` | `47.00 <47.00> (?)` | |
   | [...he/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvQmFzaWNSb290ZWRPem9uZUZpbGVTeXN0ZW0uamF2YQ==) | `74.40% <74.40%> (ø)` | `50.00 <50.00> (?)` | |
   | [.../main/java/org/apache/hadoop/fs/ozone/OFSPath.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lZnMtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvZnMvb3pvbmUvT0ZTUGF0aC5qYXZh) | `79.59% <79.59%> (ø)` | `37.00 <37.00> (?)` | |
   | [...p/ozone/om/ratis/utils/OzoneManagerRatisUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy91dGlscy9Pem9uZU1hbmFnZXJSYXRpc1V0aWxzLmphdmE=) | `75.00% <0.00%> (-11.57%)` | `76.00% <0.00%> (+37.00%)` | :arrow_down: |
   | [...che/hadoop/ozone/om/ratis/OMRatisSnapshotInfo.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy9PTVJhdGlzU25hcHNob3RJbmZvLmphdmE=) | `90.00% <0.00%> (-4.00%)` | `17.00% <0.00%> (+5.00%)` | :arrow_down: |
   | [...apache/hadoop/hdds/utils/db/RocksDBCheckpoint.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvZnJhbWV3b3JrL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy91dGlscy9kYi9Sb2Nrc0RCQ2hlY2twb2ludC5qYXZh) | `87.17% <0.00%> (-2.83%)` | `10.00% <0.00%> (+2.00%)` | :arrow_down: |
   | ... and [43 more](https://codecov.io/gh/apache/hadoop-ozone/pull/1021/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=footer). Last update [f7fcadc...c420b9a](https://codecov.io/gh/apache/hadoop-ozone/pull/1021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [hadoop-ozone] asfgit merged pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
asfgit merged pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021


   




[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644963282


   I'm going to force-push to the feature branch to remove the empty commits. All 7 runs, for the record:
   
   1. https://github.com/apache/hadoop-ozone/runs/762529340
   2. https://github.com/apache/hadoop-ozone/runs/764622657
   3. https://github.com/apache/hadoop-ozone/runs/773164764 (clean run)
   4. https://github.com/apache/hadoop-ozone/runs/773357450
   5. https://github.com/apache/hadoop-ozone/runs/774192890
   6. https://github.com/apache/hadoop-ozone/runs/777297914
   7. https://github.com/apache/hadoop-ozone/runs/777677781 (clean run)




[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-641465198


   Some of the [bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) that showed up in SonarCloud don't relate to OFS (e.g. `BasicOzoneFileSystem`/`BasicOzoneClientAdapterImpl`).
   
   ~~I will only be addressing the ones in OFS, i.e. `BasicRootedOzoneClientAdapterImpl`, in HDDS-3767.~~
   
   A closer look reveals that the bug isn't really a bug. If I "address" it by closing `ozoneOutputStream` in a `finally` block, the returned `OzoneFSOutputStream` would wrap a stream that is already closed.
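   To illustrate the point, here is a minimal, self-contained sketch of the pitfall (hypothetical classes, not the actual Ozone `OzoneOutputStream`/`OzoneFSOutputStream`): closing the inner stream in a `finally` block during construction hands the caller a stream that is already dead.

   ```java
   import java.io.IOException;
   import java.io.OutputStream;

   // Hypothetical sketch: shows why closing the inner stream in a finally
   // block while constructing the wrapper would return an already-closed
   // stream to the caller.
   public class StreamCloseSketch {

     // Minimal stand-in for the inner stream: refuses writes after close().
     static class InnerStream extends OutputStream {
       private boolean closed = false;

       @Override
       public void write(int b) throws IOException {
         if (closed) {
           throw new IOException("stream is closed");
         }
       }

       @Override
       public void close() {
         closed = true;
       }
     }

     // Mimics the suggested "fix": close the resource in finally.
     static OutputStream createWithFinallyClose() {
       InnerStream inner = new InnerStream();
       try {
         return inner; // wrapper construction elided for brevity
       } finally {
         inner.close(); // runs before the caller ever sees the stream
       }
     }

     public static void main(String[] args) {
       OutputStream out = createWithFinallyClose();
       try {
         out.write(1);
         System.out.println("write succeeded");
       } catch (IOException e) {
         // This branch is taken: the returned stream is already closed.
         System.out.println("write failed: " + e.getMessage());
       }
     }
   }
   ```

   The `finally` block executes after the `return` expression is evaluated but before control reaches the caller, so any subsequent `write()` on the returned stream fails.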




[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-641465198


   Some of the [bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) that showed up in SonarCloud don't relate to OFS (e.g. `BasicOzoneFileSystem`/`BasicOzoneClientAdapterImpl`).
   
   I will only be addressing the ones in OFS, i.e. `BasicRootedOzoneClientAdapterImpl`, in HDDS-3767.




[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r437760431



##########
File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
##########
@@ -0,0 +1,904 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.apache.http.client.utils.URIBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.fs.ozone.Constants.LISTING_PAGE_SIZE;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_DEFAULT_USER;
+import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.VOLUME_NOT_EMPTY;
+
+/**
+ * The minimal Ozone Filesystem implementation.
+ * <p>
+ * This is a basic version which doesn't extend
+ * KeyProviderTokenIssuer and doesn't include statistics. It can be used
+ * with older Hadoop versions. For newer Hadoop versions, use the
+ * full-featured RootedOzoneFileSystem.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class BasicRootedOzoneFileSystem extends FileSystem {
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneFileSystem.class);
+
+  /**
+   * The Ozone client for connecting to Ozone server.
+   */
+
+  private URI uri;
+  private String userName;
+  private Path workingDir;
+  private OzoneClientAdapter adapter;
+  private BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+  private static final String URI_EXCEPTION_TEXT =
+      "URL should be one of the following formats: " +
+      "ofs://om-service-id/path/to/key  OR " +
+      "ofs://om-host.example.com/path/to/key  OR " +
+      "ofs://om-host.example.com:5678/path/to/key";
+
+  @Override
+  public void initialize(URI name, Configuration conf) throws IOException {
+    super.initialize(name, conf);
+    setConf(conf);
+    Objects.requireNonNull(name.getScheme(), "No scheme provided in " + name);
+    Preconditions.checkArgument(getScheme().equals(name.getScheme()),
+        "Invalid scheme provided in " + name);
+
+    String authority = name.getAuthority();
+    if (authority == null) {
+      // authority is null when fs.defaultFS is not a qualified ofs URI and
+      // ofs:/// is passed to the client. matcher will NPE if authority is null
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+
+    String omHostOrServiceId;
+    int omPort = -1;
+    // Parse hostname and port
+    String[] parts = authority.split(":");
+    if (parts.length > 2) {
+      throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+    }
+    omHostOrServiceId = parts[0];
+    if (parts.length == 2) {
+      try {
+        omPort = Integer.parseInt(parts[1]);
+      } catch (NumberFormatException e) {
+        throw new IllegalArgumentException(URI_EXCEPTION_TEXT);
+      }
+    }
+
+    try {
+      uri = new URIBuilder().setScheme(OZONE_OFS_URI_SCHEME)
+          .setHost(authority)
+          .build();
+      LOG.trace("Ozone URI for OFS initialization is {}", uri);
+
+      //isolated is the default for ozonefs-lib-legacy which includes the
+      // /ozonefs.txt, otherwise the default is false. It could be overridden.
+      boolean defaultValue =
+          BasicRootedOzoneFileSystem.class.getClassLoader()
+              .getResource("ozonefs.txt") != null;
+
+      //Use string here instead of the constant as constant may not be available
+      //on the classpath of a hadoop 2.7
+      boolean isolatedClassloader =
+          conf.getBoolean("ozone.fs.isolated-classloader", defaultValue);
+
+      ConfigurationSource source;
+      if (conf instanceof OzoneConfiguration) {
+        source = (ConfigurationSource) conf;
+      } else {
+        source = new LegacyHadoopConfigurationSource(conf);
+      }
+      this.adapter =
+          createAdapter(source,
+              omHostOrServiceId, omPort,
+              isolatedClassloader);
+      this.adapterImpl = (BasicRootedOzoneClientAdapterImpl) this.adapter;
+
+      try {
+        this.userName =
+            UserGroupInformation.getCurrentUser().getShortUserName();
+      } catch (IOException e) {
+        this.userName = OZONE_DEFAULT_USER;
+      }
+      this.workingDir = new Path(OZONE_USER_DIR, this.userName)
+          .makeQualified(this.uri, this.workingDir);
+    } catch (URISyntaxException ue) {
+      final String msg = "Invalid Ozone endpoint " + name;
+      LOG.error(msg, ue);
+      throw new IOException(msg, ue);
+    }
+  }
+
+  protected OzoneClientAdapter createAdapter(ConfigurationSource conf,
+      String omHost, int omPort, boolean isolatedClassloader)
+      throws IOException {
+
+    if (isolatedClassloader) {
+      return OzoneClientAdapterFactory.createAdapter();
+    } else {
+      return new BasicRootedOzoneClientAdapterImpl(omHost, omPort, conf);
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    try {
+      adapter.close();
+    } finally {
+      super.close();
+    }
+  }
+
+  @Override
+  public URI getUri() {
+    return uri;
+  }
+
+  @Override
+  public String getScheme() {
+    return OZONE_OFS_URI_SCHEME;
+  }
+
+  @Override
+  public FSDataInputStream open(Path path, int bufferSize) throws IOException {
+    incrementCounter(Statistic.INVOCATION_OPEN);
+    statistics.incrementReadOps(1);
+    LOG.trace("open() path: {}", path);
+    final String key = pathToKey(path);
+    return new FSDataInputStream(
+        new OzoneFSInputStream(adapter.readFile(key), statistics));
+  }
+
+  protected void incrementCounter(Statistic statistic) {
+    //don't do anything in this default implementation.
+  }
+
+  @Override
+  public FSDataOutputStream create(Path f, FsPermission permission,
+      boolean overwrite, int bufferSize,
+      short replication, long blockSize,
+      Progressable progress) throws IOException {
+    LOG.trace("create() path:{}", f);
+    incrementCounter(Statistic.INVOCATION_CREATE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(f);
+    return createOutputStream(key, replication, overwrite, true);
+  }
+
+  @Override
+  public FSDataOutputStream createNonRecursive(Path path,
+      FsPermission permission,
+      EnumSet<CreateFlag> flags,
+      int bufferSize,
+      short replication,
+      long blockSize,
+      Progressable progress) throws IOException {
+    incrementCounter(Statistic.INVOCATION_CREATE_NON_RECURSIVE);
+    statistics.incrementWriteOps(1);
+    final String key = pathToKey(path);
+    return createOutputStream(key,
+        replication, flags.contains(CreateFlag.OVERWRITE), false);
+  }
+
+  private FSDataOutputStream createOutputStream(String key, short replication,
+      boolean overwrite, boolean recursive) throws IOException {
+    return new FSDataOutputStream(adapter.createFile(key,
+        replication, overwrite, recursive), statistics);
+  }
+
+  @Override
+  public FSDataOutputStream append(Path f, int bufferSize,
+      Progressable progress) throws IOException {
+    throw new UnsupportedOperationException("append() Not implemented by the "
+        + getClass().getSimpleName() + " FileSystem implementation");
+  }
+
+  private class RenameIterator extends OzoneListingIterator {
+    private final String srcPath;
+    private final String dstPath;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    RenameIterator(Path srcPath, Path dstPath)
+        throws IOException {
+      super(srcPath);
+      this.srcPath = pathToKey(srcPath);
+      this.dstPath = pathToKey(dstPath);
+      LOG.trace("rename from:{} to:{}", this.srcPath, this.dstPath);
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(srcPath);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) throws IOException {
+      String newPath = dstPath.concat(keyPath.substring(srcPath.length()));
+      adapterImpl.rename(this.bucket, keyPath, newPath);
+      return true;
+    }
+  }
+
+  /**
+   * Check whether the source and destination path are valid and then perform
+   * rename from source path to destination path.
+   * <p>
+   * The rename operation is performed by renaming the keys with src as prefix.
+   * For such keys the prefix is changed from src to dst.
+   *
+   * @param src source path for rename
+   * @param dst destination path for rename
+   * @return true if rename operation succeeded or
+   * if the src and dst have the same path and are of the same type
+   * @throws IOException on I/O errors or if the src/dst paths are invalid.
+   */
+  @Override
+  public boolean rename(Path src, Path dst) throws IOException {
+    incrementCounter(Statistic.INVOCATION_RENAME);
+    statistics.incrementWriteOps(1);
+    if (src.equals(dst)) {
+      return true;
+    }
+
+    LOG.trace("rename() from: {} to: {}", src, dst);
+    if (src.isRoot()) {
+      // Cannot rename root of file system
+      LOG.trace("Cannot rename the root of a filesystem");
+      return false;
+    }
+
+    // src and dst should be in the same bucket
+    OFSPath ofsSrc = new OFSPath(src);
+    OFSPath ofsDst = new OFSPath(dst);
+    if (!ofsSrc.isInSameBucketAs(ofsDst)) {
+      throw new IOException("Cannot rename a key to a different bucket");
+    }
+
+    // Cannot rename a directory to its own subdirectory
+    Path dstParent = dst.getParent();
+    while (dstParent != null && !src.equals(dstParent)) {
+      dstParent = dstParent.getParent();
+    }
+    Preconditions.checkArgument(dstParent == null,
+        "Cannot rename a directory to its own subdirectory");
+    // Check if the source exists
+    FileStatus srcStatus;
+    try {
+      srcStatus = getFileStatus(src);
+    } catch (FileNotFoundException fnfe) {
+      // source doesn't exist, return
+      return false;
+    }
+
+    // Check if the destination exists
+    FileStatus dstStatus;
+    try {
+      dstStatus = getFileStatus(dst);
+    } catch (FileNotFoundException fnde) {
+      dstStatus = null;
+    }
+
+    if (dstStatus == null) {
+      // If dst doesn't exist, check whether dst parent dir exists or not
+      // if the parent exists, the source can still be renamed to dst path
+      dstStatus = getFileStatus(dst.getParent());
+      if (!dstStatus.isDirectory()) {
+        throw new IOException(String.format(
+            "Failed to rename %s to %s, %s is a file", src, dst,
+            dst.getParent()));
+      }
+    } else {
+      // if dst exists and source and destination are same,
+      // check both the src and dst are of same type
+      if (srcStatus.getPath().equals(dstStatus.getPath())) {
+        return !srcStatus.isDirectory();
+      } else if (dstStatus.isDirectory()) {
+        // If dst is a directory, rename source as subpath of it.
+        // for example rename /source to /dst will lead to /dst/source
+        dst = new Path(dst, src.getName());
+        FileStatus[] statuses;
+        try {
+          statuses = listStatus(dst);
+        } catch (FileNotFoundException fnde) {
+          statuses = null;
+        }
+
+        if (statuses != null && statuses.length > 0) {
+          // dst exists as a file, or as a non-empty directory
+          throw new FileAlreadyExistsException(String.format(
+              "Failed to rename %s to %s, file already exists or not empty!",
+              src, dst));
+        }
+      } else {
+        // If dst is not a directory
+        throw new FileAlreadyExistsException(String.format(
+            "Failed to rename %s to %s, file already exists!", src, dst));
+      }
+    }
+
+    if (srcStatus.isDirectory()) {
+      if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) {
+        LOG.trace("Cannot rename a directory to a subdirectory of self");
+        return false;
+      }
+    }
+    RenameIterator iterator = new RenameIterator(src, dst);
+    boolean result = iterator.iterate();
+    if (result) {
+      createFakeParentDirectory(src);
+    }
+    return result;
+  }
+
+  private class DeleteIterator extends OzoneListingIterator {
+    final private boolean recursive;
+    private final OzoneBucket bucket;
+    private final BasicRootedOzoneClientAdapterImpl adapterImpl;
+
+    DeleteIterator(Path f, boolean recursive)
+        throws IOException {
+      super(f);
+      this.recursive = recursive;
+      if (getStatus().isDirectory()
+          && !this.recursive
+          && listStatus(f).length != 0) {
+        throw new PathIsNotEmptyDirectoryException(f.toString());
+      }
+      // Initialize bucket here to reduce number of RPC calls
+      OFSPath ofsPath = new OFSPath(f);
+      // TODO: Refactor later.
+      adapterImpl = (BasicRootedOzoneClientAdapterImpl) adapter;
+      this.bucket = adapterImpl.getBucket(ofsPath, false);
+    }
+
+    @Override
+    boolean processKeyPath(String keyPath) {
+      if (keyPath.equals("")) {
+        LOG.trace("Skipping deleting root directory");
+        return true;
+      } else {
+        LOG.trace("Deleting: {}", keyPath);
+        boolean succeed = adapterImpl.deleteObject(this.bucket, keyPath);
+        // if recursive delete is requested ignore the return value of
+        // deleteObject and issue deletes for other keys.
+        return recursive || succeed;
+      }
+    }
+  }
+
+  /**
+   * Deletes the children of the input dir path by iterating though the
+   * DeleteIterator.
+   *
+   * @param f directory path to be deleted
+   * @return true if successfully deletes all required keys, false otherwise
+   * @throws IOException
+   */
+  private boolean innerDelete(Path f, boolean recursive) throws IOException {
+    LOG.trace("delete() path:{} recursive:{}", f, recursive);
+    try {
+      DeleteIterator iterator = new DeleteIterator(f, recursive);
+      return iterator.iterate();
+    } catch (FileNotFoundException e) {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Couldn't delete {} - does not exist", f);
+      }
+      return false;
+    }
+  }
+
+  @Override
+  public boolean delete(Path f, boolean recursive) throws IOException {
+    incrementCounter(Statistic.INVOCATION_DELETE);
+    statistics.incrementWriteOps(1);
+    LOG.debug("Delete path {} - recursive {}", f, recursive);
+    FileStatus status;
+    try {
+      status = getFileStatus(f);
+    } catch (FileNotFoundException ex) {
+      LOG.warn("delete: Path does not exist: {}", f);
+      return false;
+    }
+
+    if (status == null) {
+      return false;
+    }
+
+    String key = pathToKey(f);
+    boolean result;
+
+    if (status.isDirectory()) {
+      LOG.debug("delete: Path is a directory: {}", f);
+      OFSPath ofsPath = new OFSPath(key);
+
+      // Handle rm root
+      if (ofsPath.isRoot()) {
+        // Intentionally drop support for rm root
+        // because it is too dangerous and doesn't provide much value
+        LOG.warn("delete: OFS does not support rm root. "
+            + "To wipe the cluster, please re-init OM instead.");
+        return false;
+      }
+
+      // Handle delete volume
+      if (ofsPath.isVolume()) {
+        String volumeName = ofsPath.getVolumeName();
+        if (recursive) {
+          // Delete all buckets first
+          OzoneVolume volume =
+              adapterImpl.getObjectStore().getVolume(volumeName);

Review comment:
       Yes, I agree. When implementing OFS volume and bucket deletion I tried to put the logic in the adapter, but I realized that if I want to do it the easy way (using recursion), the logic simply can't live in the adapter. Hence the hacky approach you see here.
   
   I will refactor this chunk of code and remove `adapterImpl` in a follow-up refactoring JIRA. But I think this shouldn't block the merge.






[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
sonarcloud[bot] commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644195491


   SonarCloud Quality Gate failed.
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/bug.png' alt='Bug' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/E.png' alt='E' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG) [3 Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=BUG)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/vulnerability.png' alt='Vulnerability' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) [0 Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=VULNERABILITY) (and [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/security_hotspot.png' alt='Security Hotspot' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) [1 Security Hotspot](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=SECURITY_HOTSPOT) to review)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/code_smell.png' alt='Code Smell' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/RatingBadge/A.png' alt='A' width='16' height='16' />](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL) [41 Code Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone&pullRequest=1021&resolved=false&types=CODE_SMELL)
   
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/CoverageChart/0.png' alt='8.4%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list) [8.4% Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_coverage&view=list)  
   [<img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/checks/Duplications/20.png' alt='14.9%' width='16' height='16' />](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list) [14.9% Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone&pullRequest=1021&metric=new_duplicated_lines_density&view=list)
   
   <img src='https://sonarsource.github.io/sonarcloud-github-static-resources/v2/common/message_warning.png' alt='warning' width='16' height='16' /> The version of Java (1.8.0_232) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   




[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#discussion_r437758520



##########
File path: hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystemWithMocks.java
##########
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.powermock.api.mockito.PowerMockito;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.net.URI;
+
+import static org.junit.Assert.assertEquals;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Ozone File system tests that are light weight and use mocks.
+ */
+@RunWith(PowerMockRunner.class)
+@PrepareForTest({ OzoneClientFactory.class, UserGroupInformation.class })
+@PowerMockIgnore("javax.management.*")
+public class TestRootedOzoneFileSystemWithMocks {

Review comment:
       I removed `TestRootedOzoneFileSystemWithMocks` in HDDS-3767 since `TestOzoneFileSystemWithMocks` is also removed. We can restore this later.






[GitHub] [hadoop-ozone] smengcl commented on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-644943883


   The last run (merge commit) is good. Will manually merge feature branch `HDDS-2665-ofs` shortly and close this PR.




[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #1021: HDDS-2665. Implement new Ozone Filesystem scheme ofs://

Posted by GitBox <gi...@apache.org>.
smengcl edited a comment on pull request #1021:
URL: https://github.com/apache/hadoop-ozone/pull/1021#issuecomment-643381693


   The only test failure after the HDDS-3767 merge is it-hdds-om TestOzoneManagerHAWithData#testOMRestart, a flaky test also seen on the master branch: https://elek.github.io/ozone-build-results/
   Thanks Marton for this useful page.

