Posted to common-commits@hadoop.apache.org by in...@apache.org on 2019/02/20 22:21:26 UTC

[hadoop] branch HDFS-13891 updated (215e525 -> f476bb1)

This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a change to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git.
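
To examine this branch state locally, a minimal sketch (assuming a fresh
clone and network access to gitbox):

  # clone the repository named above, then switch to the feature branch;
  # git creates a local HDFS-13891 branch tracking origin/HDFS-13891
  git clone https://gitbox.apache.org/repos/asf/hadoop.git
  cd hadoop
  git checkout HDFS-13891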


 discard 215e525  HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.
 discard f94b6e3  HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri.
 discard 27671cf  HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena.
 discard 216490e  HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.
 discard 36aca66  HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui.
 discard ecd90a6  HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena.
 discard b9d94c7  HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar.
 discard d14f874  HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
 discard d37590b  HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena.
 discard caceff1  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.
 discard b1d9ff4  HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.
 discard 7fe0b06  HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.
 discard 6ee477b  HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan.
 discard 12911fa  HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena.
 discard f081b6f  HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.
 discard f97176b  HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar.
 discard c9a6545  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.
 discard d730107  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.
 discard 537d400  HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar.
 discard 8c245e7  HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.
 discard bfef765  HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui.
 discard 19dba1d  HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri.
 discard 82ce8b8  HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.
 discard fe50806  HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable.
 discard 38d0cd0  HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.
 discard 36abfe5  HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena.
 discard fe6a587  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
 discard 13a99a7  Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."
 discard f9fc534  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
 discard 2281c97  HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena.
 discard 864e5cc  HDFS-14089. RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService. Contributed by Ranith Sardar.
 discard d29090c  HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar.
 discard 1e3e157  HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.
 discard c38dc95  HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota.
 discard edb950e  HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng.
 discard e576c88  HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.
 discard e60b42a  HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri.
 discard d9f8e11  HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota.
 discard 8c20236  HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng.
 discard 31467e7  HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.
 discard 6efe263  HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena.
     add 0cb3316  HDDS-482. NullPointer exception thrown on console when cli operation failed. Contributed by Nandakumar.
     add 996ab48  HDDS-393. Audit Parser tool for processing ozone audit logs. Contributed by Dinesh Chitlangia.
     add cb26f15  HADOOP-16025. Update the year to 2019.
     add 21fe77e  HDFS-14184. [SPS] Add support for URI based path in satisfystoragepolicy command. Contributed by Ayush Saxena.
     add f660e5e  HDFS-14163. Debug Admin Command Should Support Generic Options. Contributed by Ayush Saxena.
     add 040a202  HADOOP-15323. AliyunOSS: Improve copy file performance for AliyunOSSFileSystemStore. Contributed wujinhu.
     add cfe89e6  YARN-9164. Shutdown NM may cause NPE when opportunistic container scheduling is enabled. Contributed by lujie.
     add 14d232c  HDDS-957. Replace incorrect use of system property user.name. Contributed by Dinesh Chitlangia.
     add ecdeaa7  HDFS-14084. Need for more stats in DFSClient. Contributed by Pranay Singh.
     add dfceffa  YARN-9147. Rmove auxiliary services when manifest file is removed.            Contributed by Billie Rinaldi
     add f4906ac  YARN-9038. [CSI] Add ability to publish/unpublish volumes on node managers. Contributed by Weiwei Yang.
     add 573b158  YARN-8567. Fetching yarn logs fails for long running application if it is not present in timeline store. Contributed by Tarun Parimi.
     add 8c6978c  YARN-6149. Allow port range to be specified while starting NM Timeline collector manager. Contributed by Abhishek Modi.
     add 51427cb  HADOOP-15997. KMS client uses wrong UGI after HADOOP-14445. Contributed by Wei-Chiu Chuang.
     add ddc0a40  HDDS-896. Handle over replicated containers in SCM. Contributed by Nandakumar.
     add f4e1824  HADOOP-16028. Fix NetworkTopology chooseRandom function to support excluded nodes. Contributed by Sihai Ke.
     add 6e35f71  YARN-9166. Fix logging for preemption of Opportunistic containers for Guaranteed containers. Contributed by Abhishek Modi.
     add d43af8b  HADOOP-15996.  Improved Kerberos username mapping strategy in Hadoop.                Contributed by Bolke de Bruin
     add 999da98  HDDS-915. Submit client request to OM Ratis server. Contributed by Hanisha Koneru.
     add 1f42527  Revert "HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu."
     add 650b9cb  YARN-9178. TestRMAdminCli#testHelp is failing in trunk. Contributed by Abhishek Modi.
     add 8f004fe  YARN-9141. [Submarine] JobStatus outputs with system UTC clock, not local clock. (Zac Zhou via wangda)
     add 2c02aa6  YARN-9160. [Submarine] Document 'PYTHONPATH' environment variable setting when using -localization options. (Zhankun Tang via wangda)
     add d3321fb  Revert "YARN-9178. TestRMAdminCli#testHelp is failing in trunk. Contributed by Abhishek Modi."
     add f87b3b1  HADOOP-16030. AliyunOSS: bring fixes back from HADOOP-15671. Contributed by wujinhu.
     add 944cf87  YARN-9173. FairShare calculation broken for large values after YARN-8833. Contributed by Wilfred Spiegelenburg.
     add 5db7c49  YARN-9162. Fix TestRMAdminCLI#testHelp. Contributed by Ayush Saxena.
     add 32d5caa  HADOOP-15937. [JDK 11] Update maven-shade-plugin.version to 3.2.1. Contributed by Dinesh Chitlangia.
     add d14c56d  HDDS-916. MultipartUpload: Complete Multipart upload request. Contributed by Bharat Viswanadham.
     add 992dd9d  HDDS-901. MultipartUpload: S3 API for Initiate multipart upload. Contributed by Bharat Viswanadham.
     add d66925a  HDDS-902. MultipartUpload: S3 API for uploading a part file. Contributed by Bharat Viswanadham.
     add d715233  HADOOP-14556. S3A to support Delegation Tokens.
     add 802932c  HADOOP-16031.  Fixed TestSecureLogins unit test.  Contributed by Akira Ajisaka
     add cdfbec4  HDDS-930. Multipart Upload: Abort multiupload request. Contributed by Bharat Viswanadham.
     add 06279ec  HDDS-946. AuditParser - insert audit to database in batches. contributed by Dinesh Chitlangia.
     add 0a01d49  YARN-8822. Nvidia-docker v2 support for YARN GPU feature. (Charo Zhang via wangda)
     add 0f26b5e  HDDS-931. Add documentation for ozone shell command providing ozone mapping for a S3Bucket. Contributed by Bharat Viswanadham.
     add 4894115  YARN-9169. Add metrics for queued opportunistic and guaranteed containers. Contributed by Abhishek Modi.
     add 7f78397  Revert "HADOOP-14556. S3A to support Delegation Tokens."
     add 4297e20  HDDS-926. Use Timeout rule for the the test methods in TestOzoneManager. Contributed by Dinesh Chitlangia.
     add 0921b70  YARN-9037. [CSI] Ignore volume resource in resource calculators based on tags. Contributed by Sunil Govindan.
     add 188bebb  HADOOP-16018. DistCp won't reassemble chunks when blocks per chunk > 0.
     add 396ffba  HDDS-968. Fix TestObjectPut failures. Contributed by Bharat Viswanadham.
     add 695e93c  HDDS-969. Fix TestOzoneManagerRatisServer test failure. Contributed by Bharat Viswanadham.
     add 999f31f  HDDS-924. MultipartUpload: S3 APi for complete Multipart Upload. Contributed by Bharat Viswanadham.
     add 1a08302  HDFS-14189. Fix intermittent failure of TestNameNodeMetrics. Contributed by Ayush Saxena.
     add 32cf041  HDDS-965. Ozone: checkstyle improvements and code quality scripts. Contributed by Elek, Marton.
     add 6a92346  YARN-6523. Optimize system credentials sent in node heartbeat responses. Contributed by Manikandan R
     add 4ab5260  HDFS-14132. Add BlockLocation.isStriped() to determine if block is replicated or Striped (Contributed by Shweta Yakkali via Daniel Templeton)
     add 709ddb1  HADOOP-15941. [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible.
     add 3420e26  HADOOP-16027. [DOC] Effective use of FS instances during S3A integration tests. Contributed by Gabor Bota.
     add 8dd11a1  HDDS-947. Implement OzoneManager State Machine.
     add f4617c6  Revert "HDDS-947. Implement OzoneManager State Machine."
     add c634589  Revert "HDFS-14084. Need for more stats in DFSClient. Contributed by Pranay Singh."
     add 2091d1a  HDDS-941. Rename ChunkGroupInputStream to keyInputStream and ChunkInputStream to BlockInputStream. Contributed by  Shashikant Banerjee.
     add e8d1900  HADOOP-16040. ABFS: Bug fix for tolerateOobAppends configuration.
     add 7211269  HADOOP-15662. Better exception handling of DNS errors.
     add 852701f  HADOOP-16036. WASB: Disable jetty logging configuration announcement.
     add 33c009a4 HADOOP-15909. KeyProvider class should implement Closeable. Contributed by Kuhu Shukla.
     add d4ca907  HADOOP-16016. TestSSLFactory#testServerWeakCiphers fails on Java 1.8.0_191 or upper
     add 9aeaaa0  HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.
     add dddad98  HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.
     add a4eefe5  HDDS-947. Implement OzoneManager State Machine.
     add fb8932a  HADOOP-16029. Consecutive StringBuilder.append can be reused. Contributed by Ayush Saxena.
     add 01cb958  HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.
     add bf08f4a  HADOOP-15481. Emit FairCallQueue stats as metrics. Contributed by Christopher Gregorian.
     add 35fa3bd  HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
     add 3bb745d  HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.
     add 04fcbef  HADOOP-16043. NPE in ITestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is not set.
     add c4a00d1  HADOOP-15843. s3guard bucket-info command to not print a stack trace on bucket-not-found.
     add 6d0bffe  HADOOP-14556. S3A to support Delegation Tokens.
     add 30863c5  HADOOP-16044. ABFS: Better exception handling of DNS errors followup
     add 05c84ab  HDDS-977. Exclude dependency-reduced-pom.xml from ozone rat check. Contributed by Elek, Marton.
     add 614af50  YARN-9179. Fix NPE in AbstractYarnScheduler#updateNewContainerInfo.
     add ccc4362  HADOOP-16019. ZKDelegationTokenSecretManager won't log exception message occured in function setJaasConfiguration.
     add f280f52  HDDS-978. Fix typo in doc : Client > S3 section. Contributed by  Dinesh Chitlangia.
     add 01405df  HADOOP-15941. Addendum patch. Contributed by Takanobu Asanuma.
     add ff61931  HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.
     add d3920ec  HDDS-5. Enable OzoneManager kerberos auth. Contributed by Ajay Kumar.
     add 8e6743e  HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.
     add bfa4929  Revert "Bad merge with 996a627b289947af3894bf83e7b63ec702a665cd"
     add 914e93b  HDDS-7. Enable kerberos auth for Ozone client in hadoop rpc. Contributed by Ajay Kumar.
     add e47135d  HDDS-70. Fix config names for secure ksm and scm. Contributed by Ajay Kumar.
     add faf53f8  HDDS-100. SCM CA: generate public/private key pair for SCM/OM/DNs. Contributed by Ajay Kumar.
     add 570b503  Fix merge conflicts
     add 0b034b7  HDDS-546. Resolve bouncy castle dependency for hadoop-hdds-common. Contributed by Ajay Kumar.
     add 2d26944  HDDS-548. Create a Self-Signed Certificate. Contributed by Anu Engineer.
     add 9920506  HDDS-547. Fix secure docker and configs. Contributed by Xiaoyu Yao.
     add d451188  HDDS-566. Move OzoneSecure docker-compose after HDDS-447. Contributed by Xiaoyu Yao.
     add 0aab740  HDDS-10. Add kdc docker image for secure ozone cluster. Contributed by Ajay Kumar.
     add 8d7c5f4  HDDS-588. SelfSignedCertificate#generateCertificate should sign the certificate the configured security provider. Contributed by Xiaoyu Yao.
     add 16e0bb8  HDDS-591. Adding ASF license header to kadm5.acl. Contributed by Ajay Kumar.
     add e89c35a  HDDS-704. Fix the Dependency convergence issue on HDDS-4. Contributed by Xiaoyu Yao.
     add 61e85d7  HDDS-684. Fix HDDS-4 branch after HDDS-490 and HADOOP-15832. Contributed by Xiaoyu Yao.
     add c260c19  HDDS-101. SCM CA: generate CSR for SCM CA clients. Contributed by Xiaoyu Yao.
     add 33c274e  HDDS-103. SCM CA: Add new security protocol for SCM to expose security related functions. Contributed by Ajay Kumar.
     add 8b8a3f5  HDDS-760. Add asf license to TestCertificateSignRequest. Contributed by Ajay Kumar.
     add a28ad7a  HDDS-753. SCM security protocol server is not starting. Contributed by Ajay Kumar.
     add 53120e2  HDDS-592. Fix ozone-secure.robot test. Contributed by Ajay Kumar.
     add 6ad794b  HDDS-778. Add an interface for CA and Clients for Certificate operations Contributed by Anu Engineer.
     add 6d6b1a0  HDDS-836. Add TokenIdentifier Ozone for delegation token and block token. Contributed by Ajay Kumar.
     add bb4a26c  HDDS-8. Add OzoneManager Delegation Token support. Contributed by Ajay Kumar.
     add 7e27706  HDDS-9. Add GRPC protocol interceptors for Ozone Block Token. Contributed by Xiaoyu Yao.
     add 8253106  HDDS-873. Fix TestSecureOzoneContainer NPE after HDDS-837. Contributed by Xiaoyu Yao.
     add 0c8829a  HDDS-696. Bootstrap genesis SCM(CA) with self-signed certificate. Contributed by Anu Engineer.
     add 6d522dc  HDDS-804. Block token: Add secret token manager. Contributed by Ajay Kumar.
     add 417951a  HDDS-884. Fix merge issue that causes NPE OzoneManager#httpServer. Contributed by Xiaoyu Yao.
     add f894d86  HDDS-115. GRPC: Support secure gRPC endpoint with mTLS. Contributed by Xiaoyu Yao.
     add 2b11522  HDDS-929. Remove ozone.max.key.len property. Contributed by Ajay Kumar.
     add 50c4045  HDDS-805. Block token: Client api changes for block token. Contributed by Ajay Kumar.
     add ddaef67  HDDS-937. Create an S3 Auth Table. Contributed by Dinesh Chitlangia.
     add 924bea9  HDDS-102. SCM CA: SCM CA server signs certificate for approved CSR. Contributed by Anu Engineer.
     add 30bfc9c  HDDS-955. SCM CA: Add CA to SCM. Contributed by Anu Engineer.
     add 1d5734e  HDDS-938. Add Client APIs for using S3 Auth interface. Contributed by  Dinesh Chitlangia.
     add a5d0fcf  HDDS-963. Fix failure in TestOzoneShell due to null check in SecurityConfig. Contributed by Ajay Kumar.
     add 8978466  HDDS-945. Fix generics warnings in delegation token. Contributed by Ajay Kumar.
     add 0faa570  HDDS-964. Fix test failure in TestOmMetrics. Contributed by Ajay Kumar.
     add 0e16cf1  HDDS-970. Fix classnotfound error for bouncy castle classes in OM,SCM init. Contributed by Ajay Kumar.
     add 140565f  HDDS-967. Fix failures in TestOzoneConfigurationFields. Contributed by Ajay Kumar.
     add 01a7f9e  HDDS-597. Ratis: Support secure gRPC endpoint with mTLS for Ratis. Contributed by Ajay Kumar.
     add c0683ed  HDDS-960. Add cli command option for getS3Secret. Contributed by Dinesh Chitlangia.
     add 06c83d3  HDDS-984. Fix TestOzoneManagerRatisServer.testIsReadOnlyCapturesAllCmdTypeEnums. Contributed by Xiaoyu Yao.
     add 2aaaf12  HDDS-943. Add block token validation in HddsDispatcher/XceiverServer. Contributed by Ajay Kumar.
     add 6be3923  YARN-9150 Making TimelineSchemaCreator support different backends for Timeline Schema Creation in ATSv2. Contributed by Sushil Ks
     add 713ded6  YARN-9150 Making TimelineSchemaCreator support different backends for Timeline Schema Creation in ATSv2. Contributed by Sushil Ks
     add 104ef5d  YARN-8747. [UI2] YARN UI2 page loading failed due to js error under some time zone configuration. Contributed by collinma.
     add f048512  HDFS-14192. Track missing DFS operations in Statistics and StorageStatistics. Contributed by Ayush Saxena.
     add 54b11de  HDDS-898. Continue token should contain the previous dir in Ozone s3g object list. Contributed by Elek Marton.
     add 96ea464  HDDS-971. ContainerDataConstructor throws exception on QUASI_CLOSED and UNHEALTHY container state. Contributed by Lokesh Jain.
     add 0a46bae  YARN-9203. Fix typos in yarn-default.xml.
     add 6d7eedf  YARN-9194. Invalid event: REGISTERED and LAUNCH_FAILED at FAILED, and NullPointerException happens in RM while shutdown a NM. (lujie via wangda)
     add 96a84b6  HDFS-14213. Remove Jansson from BUILDING.txt. Contributed by Dinesh Chitlangia.
     add dacc1a7  HDFS-14175. EC: Native XOR decoder should reset the output buffer before using it. Contributed by Ayush Saxena.
     add 8c7f6b2  YARN-9197.  Add safe guard against NPE for component instance failure.             Contributed by kyungwan nam
     add 4ac0404  HDDS-959. KeyOutputStream should handle retry failures. Contributed by Lokesh Jain.
     add c26d354  HDDS-983. Rename S3Utils to avoid conflict with HDFS classes. Contributed by Bharat Viswanadham.
     add 751bc62  Merge branch 'HDDS-4' into trunk
     add 824dfa3  YARN-8489.  Support "dominant" component concept in YARN service.             Contributed by Zac Zhou
     add 27aa6e8  HADOOP-16046. [JDK 11] Correct the compiler exclusion of org/apache/hadoop/yarn/webapp/hamlet/** classes for >= Java 9. Contributed by Devaraj K.
     add abde1e1  YARN-9204. RM fails to start if absolute resource is specified for partition capacity in CS queues. Contributed by Jiandan Yang.
     add 2e2508b  Make 3.2.0 aware to other branches
     add e996224  Make 3.2.0 aware to other branches - jdiff
     add a463cf7  HADOOP-15787. [JDK11] TestIPC.testRTEDuringConnectionSetup fails. Contributed by Zsolt Venczel.
     add d43df31  YARN-9210. RM nodes web page can not display node info. Contributed by Jiandan Yang.
     add de34fc1  HDFS-14207. ZKFC should catch exception when ha configuration missing. Contributed by Fei Hui.
     add 1ff658b  HDFS-14221. Replace Guava Optional with Java Optional. Contributed by Arpit Agarwal.
     add 6f0756f  HDFS-14222. Make ThrottledAsyncChecker constructor public. Contributed by Arpit Agarwal.
     add 00ad9e2  HADOOP-16048. ABFS: Fix Date format parser.
     add 9390a0b  HDDS-913. Ozonefs defaultFs example is wrong in the documentation. Contributed by Supratim Deka.
     add 0ef54f7  HDDS-992. ozone-default.xml has invalid text from a stale merge. Contributed by  Dinesh Chitlangia.
     add 2fa9389  YARN-9146.  Added REST API to configure auxiliary service.             Contributed by Billie Rinaldi
     add 0dd35e2  HADOOP-15922. Fixed DelegationTokenAuthenticator URL decoding for doAs user.               Contributed by He Xiaoqiao
     add 7d6792e  HDFS-14218. EC: Ls -e throw NPE when directory ec policy is disabled. Contributed by Ayush Saxena.
     add e3e076d  YARN-9205. When using custom resource type, application will fail to run due to the CapacityScheduler throws InvalidResourceRequestException(GREATER_THEN_MAX_ALLOCATION). Contributed by Zhankun Tang.
     add e72e27e  HDDS-932. Add blockade Tests for Network partition. Contributed by Nilotpal Nandi.
     add 721d5c2  YARN-8101. Add UT to verify node-attributes in RM nodes rest API. Contributed by Prabhu Joseph.
     add 2d69a35  HDDS-982. Fix TestContainerDataYaml#testIncorrectContainerFile. Contributed by Doroszlai, Attila.
     add 221e308  HDFS-14153. [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS. Contributed by Ayush Saxena.
     add 0b91329  HDDS-764. Run S3 smoke tests with replication STANDARD. (#462)
     add 951cdd7  HDFS-14061. Check if the cluster topology supports the EC policy before setting, enabling or adding it. Contributed by Kitti Nanasi.
     add dcbc8b8  HDDS-975. Manage ozone security tokens with ozone shell cli. Contributed by Ajay Kumar.
     add f3e642d  HDFS-14185. Cleanup method calls to static Assert methods in TestAddStripedBlocks (Contributed by Shweta Yakkali via Daniel Templeton)
     add e321b91  HDDS-980. Adding getOMCertificate in SCMSecurityProtocol. Contributed by Ajay Kumar.
     add c726445  YARN-8961. [UI2] Flow Run End Time shows 'Invalid date'. Contributed by Akhil PB
     add a4bd64e  YARN-9116. Capacity Scheduler: implements queue level maximum-allocation inheritance. Contributed by Aihua Xu.
     add 09a5859  HDDS-993. Update hadoop version to 3.2.0. Contributed by Supratim Deka.
     add f3d8265  HDDS-996. Incorrect data length gets updated in OM by client in case it hits exception in multiple successive block writes. Contributed by Shashikant Banerjee.
     add 3c7d700  HDDS-1002. ozonesecure compose incompatible with smoke test. Contributed by Doroszlai, Attila.
     add 4e0aa2c  HDDS-948. MultipartUpload: S3 API for Abort Multipart Upload. Contributed by Bharat Viswanadham.
     add a33ef4f  YARN-8867. Added resource localization status to YARN service status call.            Contributed by Chandni Singh
     add 3c60303  HADOOP-16065. -Ddynamodb should be -Ddynamo in AWS SDK testing document.
     add c6d901a  HDDS-1006. AuditParser assumes incorrect log format. Contributed by Dinesh Chitlangia.
     add 8ff9578  HDDS-1007. Add robot test for AuditParser. Contributed by Dinesh Chitlangia.
     add 45c4cfe  HDDS-906. Display the ozone version on SCM/OM web ui instead of Hadoop version. Contributed by Doroszlai, Attila.
     add 2181b18  HDDS-990. Typos in Ozone doc. Contributed by Doroszlai, Attila.
     add 5dae1a0  HDDS-973. HDDS/Ozone fail to build on Windows. Contributed by Xiaoyu Yao.
     add a448b05  HDDS-1009. TestAbortMultipartUpload is missing the apache license text. Contributed by Dinesh Chitlangia.
     add 9fc7df8  HDDS-793. Support custom key/value annotations on volume/bucket/key. Contributed by Elek, Marton.
     add 84bb980  YARN-7761. [UI2] Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link. Contributed by Akhil PB.
     add 45caeee  HDFS-14228. Incorrect getSnapshottableDirListing() javadoc. Contributed by Dinesh Chitlangia.
     add 1d52327  HDFS-14084. Need for more stats in DFSClient. Contributed by Pranay Singh.
     add 2ec296e  HDDS-991. Fix failures in TestSecureOzoneCluster. Contributed by Ajay Kumar.
     add dc5af4c  HDFS-12729. Document special paths in HDFS. Contributed by Masatake Iwasaki.
     add 6cace58  YARN-9222. Print launchTime in ApplicationSummary
     add fb69519  HDDS-1011. Fix NPE BucketManagerImpl.setBucketProperty. Contributed by Xiaoyu Yao.
     add 1ab69a9  YARN-9221.  Added flag to disable dynamic auxiliary service feature.             Contributed by Billie Rinaldi
     add 91649c3  HDDS-1013. NPE while listing directories.
     add 47d6b9b  HADOOP-16075. Upgrade checkstyle version to 8.16.
     add 3b49d7a  HDDS-989. Check Hdds Volumes for errors. Contributed by Arpit Agarwal.
     add 8326450  HDDS-974. Add getServiceAddress method to ServiceInfo and use it in TestOzoneShell. Contributed by Doroszlai, Attila.
     add 2e636dd  YARN-9074. Consolidate docker removal logic in ContainerCleanup.            Contributed by Zhaohui Xin
     add f5a95f7  YARN-8901. Fixed restart policy NEVER/ON_FAILURE with component dependency.            Contributed by Suma Shivaprasad
     add 4f63ffe  YARN-9237. NM should ignore sending finished apps to RM during RM fail-over. Contributed by Jiandan Yang.
     add 2d06112  HDDS-1022. Add cmd type in getCommandResponse in SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.
     add 085f0e8  YARN-9086. [CSI] Run csi-driver-adaptor as aux service. Contributed by Weiwei Yang.
     add 5d578d0  HDDS-1004. SCMContainerManager#updateContainerStateInternal fails for QUASI_CLOSE and FORCE_CLOSE events. Contributed by Lokesh Jain.
     add 04105bb  YARN-6616: YARN AHS shows submitTime for jobs same as startTime. Contributed by  Prabhu Joseph
     add d1714c2  Revert "HDFS-14084. Need for more stats in DFSClient. Contributed by Pranay Singh."
     add b3bc94e  HDFS-14236. Lazy persist copy/ put fails with ViewFs.
     add 02eb918  HADOOP-16041. Include Hadoop version in User-Agent string for ABFS. Contributed by Shweta Yakkali.
     add 1129288  HADOOP-14178. Move Mockito up to version 2.23.4. Contributed by Akira Ajisaka and Masatake Iwasaki.
     add d583cc4  HDDS-1024. Handle DeleteContainerCommand in the SCMDatanodeProtocolServer. Contributed by Bharat Viswanadham.
     add 14441cc  HDDS-1032. Package builds are failing with missing org.mockito:mockito-core dependency version. Contributed by Doroszlai, Attila.
     add a3a9ae3  YARN-9251. Build failure for -Dhbase.profile=2.0. Contributed by Rohith Sharma K S.
     add 0e95ae4  HDDS-1030. Move auditparser robot tests under ozone basic. Contributed by Dinesh Chitlangia.
     add 7456fc9  HDDS-1031. Update ratis version to fix a DN restart Bug. Contributed by Bharat Viswanadham.
     add c354195  HDDS-1016. Allow marking containers as unhealthy. Contributed by Arpit Agarwal.
     add 945a61c  HDDS-549. Add support for key rename in Ozone Shell. Contributed by Doroszlai Attila.
     add 5372927  HDDS-1035. Intermittent TestRootList failure. Contributed by Doroszlai Attila.
     add 71c49fa  YARN-9099. GpuResourceAllocator#getReleasingGpus calculates number of GPUs in a wrong way. Contributed by Szilard Nemeth.
     add 033d97a  HDDS-956. MultipartUpload: List Parts for a Multipart upload key. Contributed by Bharat Viswanadham.
     add bcc3a79  HADOOP-16084. Fix the comment for getClass in Configuration. Contributed by Fengnan Li.
     add f738b39  YARN-9191. Add cli option in DS to support enforceExecutionType in resource requests. Contributed by Abhishek Modi.
     add 0ab7fc9  HDFS-14187. Make warning message more clear when there are not enough data nodes for EC write. Contributed by Kitti Nanasi.
     add 16195ea  HDDS-1025. Handle replication of closed containers in DeadNodeHanlder. Contributed by Bharat Viswanadham.
     add 13aa939  HDDS-997. Add blockade Tests for scm isolation and mixed node isolation. Contributed by Nilotpal Nandi.
     add 7f46d13  HADOOP-16079. Token.toString faulting if any token listed can't load.
     add 4123353  HDDS-1037. Fix the block discard logic in Ozone client. Contributed by Shashikant Banerjee.
     add 2c13513  YARN-8549 Adding a NoOp timeline writer and reader plugin classes for ATSv2. Contributed by Prabha Manepalli.
     add 28ad20a  YARN-9262. TestRMAppAttemptTransitions is failing with an NPE. Contributed by lujie.
     add f20b043  YARN-9263. TestConfigurationNodeAttributesProvider fails after Mockito updated. Contributed by Weiwei Yang.
     add 69bcff3  YARN-9231. TestDistributedShell fix timeout. Contributed by Prabhu Joseph.
     add b6f90d3  HDDS-1021. Allow ozone datanode to participate in a 1 node as well as 3 node Ratis Pipeline. Contributed by Shashikant Banerjee.
     add ec77e95  HDFS-14232. libhdfs is not included in binary tarball. Contributed by Akira Ajisaka.
     add 9aa3dc8  HDFS-14158. Checkpointer ignores configured time period > 5 minutes
     add c991e2c  MAPREDUCE-7177. Disable speculative execution in TestDFSIO. Contributed by Zhaohui Xin.
     add 0f9aa5b  HADOOP-16089. AliyunOSS: update oss-sdk version to 3.4.1. Contributed by wujinhu.
     add 604b248  YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.
     add 758e9ce  HADOOP-16076. SPNEGO+SSL Client Connections with HttpClient Broken.
     add 0e79a86  HDFS-14202. dfs.disk.balancer.max.disk.throughputInMBperSec property is not working as per set value. Contributed by Ranith Sardar.
     add 5f15a60  HDFS-14125. Use parameterized log format in ECTopologyVerifier. Contributed by Kitti Nanasi.
     add 9a19d6d  HDDS-1039. OzoneManager fails to connect with secure SCM. Contributed by Ajay Kumar
     add 529791c  HADOOP-15938. [JDK 11] Remove animal-sniffer-maven-plugin to fix build. Contributed by Dinesh Chitlangia.
     add 3efa168  HDDS-1029. Allow option for force in DeleteContainerCommand. Contributed by Bharat Viswanadham.
     add 5718389  YARN-9149. yarn container -status misses logUrl when integrated with ATSv2. Contributed by Abhishek Modi.
     add aa7ce50  YARN-9275. Add link to NodeAttributes doc in PlacementConstraints document. Contributed by Masatake Iwasaki.
     add ba38db4  YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.
     add e3ec18b  YARN-6735. Have a way to turn off container metrics from NMs. Contributed by Abhishek Modi.
     add f365957  HADOOP-15229. Add FileSystem builder-based openFile() API to match createFile(); S3A to implement S3 Select through this API.
     add 9ace37b  HDDS-987. MultipartUpload: S3API for list parts of a object. Contributed by Bharat Viswanadham.
     add ba9efe0  HADOOP-16074. WASB: Update container not found error code.
     add 9f2da01  HDDS-776. Make OM initialization resilient to dns failures. Contributed by Doroszlai, Attila.
     add 194f0b4  HDDS-631. Ozone classpath shell command is not working. Contributed by Elek, Marton.
     add 2044967  YARN-9246 NPE when executing a command yarn node -status or -states without additional arguments. Contributed by Masahiro Tanaka
     add fa8cd1b  HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth.
     add 308f316  Make upstream aware of 3.1.2 release
     add 49ddd8a  HDFS-14231. DataXceiver#run() should not log exceptions caused by InvalidToken exception as an error. Contributed by Kitti Nanasi.
     add 911790c  HDDS-1027. Add blockade Tests for datanode isolation and scm failures. Contributed by Nilotpal Nandi.
     add 711d22f  YARN-9253. Add UT to verify Placement Constraint in Distributed Shell. Contributed by Prabhu Joseph.
     add d3de8e1  HDFS-14250. [SBN read]. msync should always direct to active NameNode to get latest stateID. Contributed by Chao Sun.
     add 6aa6345  HDFS-14242. OIV WebImageViewer: NPE when param op is not specified. Contributed by Siyao Meng.
     add 912d9f7  HDDS-1044. Client doesn't propogate correct error code to client on out of disk space. Contributed by Yiqun Lin.
     add 1e5e08d  YARN-7627. [ATSv2] When passing a non-number as metricslimit, the error message is wrong. Contributed by Charan Hebri.
     add 7fa62e1  YARN-8219. Add application launch time to ATSV2. Contributed by Abhishek Modi.
     add 3c96a03  YARN-8498. Yarn NodeManager OOM Listener Fails Compilation on Ubuntu 18.04. Contributed by Ayush Saxena.
     add de804e5  HADOOP-15281. Distcp to add no-rename copy option.
     add 214112b  HDDS-1010. ContainerSet#getContainerMap should be renamed. Contributed by Supratim Deka.
     add a65aca2  HDDS-922. Create isolated classloder to use ozonefs with any older hadoop versions. Contributed by Elek, Marton.
     add d1ca943  YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
     add 75e8441  HDDS-1071. Make Ozone s3 acceptance test suite centos compatible. Contributed by Elek Marton.
     add 546c5d7  HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.
     add 668817a  Revert "HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth."
     add 1f16550  HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth.
     add 4be8735  HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.
     add a140a89  HDDS-1069. Temporarily disable the security acceptance tests by default in Ozone. Contributed by Marton Elek.
     add 0c1bc4d  HDDS-981. Block allocation should involve pipeline selection and then container selection. Contributed by Lokesh Jain.
     add df7b7da  HDDS-1073. Fix FindBugs issues on OzoneBucketStub#createMultipartKey. Contributed by Aravindan Vijayan.
     add 394a9f7  HDDS-1033. Add FSStatistics for OzoneFileSystem. Contributed by Mukul Kumar Singh.
     add 1771317  HDFS-14172. Avoid NPE when SectionName#fromString returns null. Contributed by Xiang Li.
     add e0ab1bd  YARN-9282. Typo in javadoc of class LinuxContainerExecutor: hadoop.security.authetication should be 'authentication'. Contributed by Charan Hebri.
     add fb8c997  HDDS-1048. Remove SCMNodeStat from SCMNodeManager and use storage information from DatanodeInfo#StorageReportProto. Contributed by Nanda kumar.
     add e50dc7e  HDDS-1018. Update the base image of krb5 container for the secure ozone cluster. Contributed by Xiaoyu Yao.
     add 965d26c  HDDS-1026. Reads should fail over to alternate replica. Contributed by Shashikant Banerjee.
     add ed99da8  HDDS-1078. TestRatisPipelineProvider failing because of node count mismatch. Contributed by Mukul Kumar Singh.
     add a141458  HDDS-1077. TestSecureOzoneCluster does not config OM HTTP keytab. Contributed by Xiaoyu Yao.
     add 2b7f828  YARN-9252. Allocation Tag Namespace support in Distributed Shell. Contributed by Prabhu Joseph.
     add 0a1637c  YARN-8555. Parameterize TestSchedulingRequestContainerAllocation(Async) to cover both PC handler options. Contributed by Prabhu Joseph.
     add e7d1ae5  HDDS-1017. Use distributed tracing to indentify performance problems in Ozone. Contributed by Elek, Marton.
     add 73b67b2  HDDS-1040. Add blockade Tests for client failures. Contributed by Nilotpal Nandi.
     add 0ceb1b7  HDFS-14260. Replace synchronized method in BlockReceiver with atomic value. Contributed by BELUGA BEHR.
     add 5c10630  HDFS-14261. Kerberize JournalNodeSyncer unit test. Contributed by Siyao Meng.
     add ca4e46a  HDDS-1075. Fix CertificateUtil#parseRSAPublicKey charsetName. Contributed by Siddharth Wagle.
     add 6c999fe  HADOOP-16098. Fix javadoc warnings in hadoop-aws. Contributed by Masatake Iwasaki.
     add 1ce2e91  YARN-9229. Document docker registry deployment with NFS Gateway. Contributed by Eric Yang.
     add 7536488  YARN-996. REST API support for node resource configuration. Contributed by Inigo Goiri.
     add d48e61d  HDDS-1012. Add Default CertificateClient implementation. Contributed by Ajay Kumar
     add 26e6013  HDDS-1074. Remove dead variable from KeyOutputStream#addKeyLocationInfo. Contributed by Siddharth Wagle.
     add a536eb5  HDDS-360. Use RocksDBStore and TableStore for SCM Metadata. Contributed by Anu Engineer.
     add 4f7d32e  HDDS-1081. CLOSING state containers should not be added to pipeline on SCM start. Contributed by Lokesh Jain.
     add 63a9b0d  HDDS-1080. Ozonefs Isolated class loader should support FsStorageStatistics. Contributed by Elek, Marton.
     add 20b92cd  HDDS-1050. TestSCMRestart#testPipelineWithScmRestart is failing. Contributed by Supratim Deka.
     add 7806403  HDFS-14266. EC : Fsck -blockId shows null for EC Blocks if One Block Is Not Available. Contributed by Ayush Saxena.
     add 3dc2523  YARN-9184. Add a system flag to allow update to latest docker images.            Contributed by Zhaohui Xin
     add 06d7890  HDDS-1047. Fix TestRatisPipelineProvider#testCreatePipelineWithFactor. Contributed by Nilotpal Nandi.
     add 7b11b40  HADOOP-16097. Provide proper documentation for FairCallQueue. Contributed by Erik Krogen.
     add 917ac9f  HDDS-972. Add support for configuring multiple OMs. Contributed by Hanisha Koneru.
     add cf4aecc  HDDS-1034. TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure. Contributed by Mukul Kumar Singh.
     add 00c5ffa  HADOOP-16108. Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's. Contributed by Ayush Saxena.
     add 35d4f32  HDFS-14274. EC: NPE While Listing EC Policy For A Directory Following Replication Policy. Contributed by Ayush Saxena.
     add 29b411d  HDFS-14263. Remove unnecessary block file exists check from FsDatasetImpl#getBlockInputStream(). Contributed by Surendra Singh Lilhore
     add 024c872  HDFS-13617. Allow wrapping NN QOP into token in encrypted message. Contributed by Chen Liang
     add fa067aa  HDDS-936. Need a tool to map containers to ozone objects. Contributed by Sarun Singla
     add fd02686  HDFS-14241. Provide feedback on successful renameSnapshot and deleteSnapshot. Contributed by Siyao Meng.
     add dfe0f42  YARN-7824. [UI2] Yarn Component Instance page should include link to container logs. Contributed by Akhil PB.
     add 7a57974  HDDS-1096. OzoneManager#loadOMHAConfigs should use default ports in case port is not defined. Contributed by Hanisha Koneru.
     add 080a421  HDFS-14262. [SBN read] Make Log.WARN message in GlobalStateIdContext more informative. Contributed by Shweta Yakkali.
     add 0d7a5ac  HDFS-13209. DistributedFileSystem.create should allow an option to provide StoragePolicy. Contributed by Ayush Saxena.
     add 134ae8f  YARN-9293. Optimize MockAMLauncher event handling. Contributed by Bibin A Chundatt.
     add b66d5ae  YARN-9295. [UI2] Fix label typo in Cluster Overview page. Contributed by Charan Hebri.
     add 64f28f9  HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new ProxyCombiner allowing for multiple related protocols to be combined. Allow AlignmentContext to be passed in NameNodeProxyFactory. Contributed by Erik Krogen.
     add 6c8ffdb  HDDS-1100. fix asf license errors in newly added files by HDDS-936. Contributed by  Dinesh Chitlangia.
     add 2d83b24  HDDS-1108. Check s3bucket exists or not before MPU operations. Contributed by Bharat Viswanadham.
     add dabfeab  YARN-9308. fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240. Contributed by Wilfred Spiegelenburg.
     add 5656409  HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham.
     add 492e49e  Revert "HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham."
     add 084b6a6  HDDS-1099. Genesis benchmark for ozone key creation in OM. Contributed by Bharat Viswanadham.
     add 3a39d9a  YARN-9284. Fix the unit of yarn.service.am-resource.memory in the document. Contributed by Masahiro Tanaka.
     add 0395f22  HDDS-1068. Improve the error propagation for ozone sh. Contributed by Elek, Marton.
     add 5b55f35  YARN-8295. [UI2] Improve Resource Usage tab error message when there are no data available. Contributed by Charan Hebri.
     add 506bd02  HDDS-905. Create informative landing page for Ozone S3 gateway. Contributed by Elek, Marton.
     add 5cb67cf  HDDS-1097. Add genesis benchmark for BlockManager#allocateBlock. Contributed by Lokesh Jain.
     add 75e15cc  HDDS-1103.Fix rat/findbug/checkstyle errors in ozone/hdds projects. Contributed by Elek, Marton.
     add 8a426dc  HDDS-1028. Improve logging in SCMPipelineManager. Contributed by Lokesh Jain.
     add 9385ec4  YARN-9283. Javadoc of LinuxContainerExecutor#addSchedPriorityCommand has a wrong property name as reference
     add 9584b47  HDDS-1082. OutOfMemoryError because of memory leak in KeyInputStream. Contributed by Supratim Deka.
     add e0fe3d1  HDDS-1110. OzoneManager need to login during init when security is enabled. Contributed by Xiaoyu Yao.
     add de934ba  HDDS-1076. TestSCMNodeManager crashed the jvm. Contributed by Lokesh Jain.
     add 7c1b561  YARN-8927. Added support for top level Dockerhub images to trusted registry using library keyword.            Contributed by Zhankun Tang
     add d10444e  HDDS-1092. Use Java 11 JRE to run Ozone in containers.
     add 217bdbd  HDDS-1116.Add java profiler servlet to the Ozone web servers. Contributed by Elek, Marton.
     add afe126d  HDDS-1114. Fix findbugs/checkstyle/accepteance errors in Ozone. Contributed by Marton Elek.
     add dde0ab5  HDFS-14258. Introduce Java Concurrent Package To DataXceiverServer Class. Contributed by BELUGA BEHR.
     add 7ea9149  HDDS-1041. Support TDE(Transparent Data Encryption) for Ozone. Contributed by Xiaoyu Yao.
     add 9057aa9  SUBMARINE-1. Move code base of submarine from yarn-applications to top directory. Contributed by Wangda Tan.
     add ba56bc2  YARN-9213. RM Web UI v1 does not show custom resource allocations for containers page. Contributed by Szilard Nemeth.
     add 0f2b65c  HADOOP-16116. Fix Spelling Mistakes - DECOMISSIONED. Contributed by BELUGA BEHR.
     add db4d1a1  YARN-9060. [YARN-8851] Phase 1 - Support device isolation and use the Nvidia GPU plugin as an example. Contributed by Zhankun Tang.
     add f2fb653  HDDS-1106. Introduce queryMap in PipelineManager. Contributed by Lokesh Jain.
     add 920a896  Revert "HADOOP-15843. s3guard bucket-info command to not print a stack trace on bucket-not-found."
     add 235e3da  HDFS-14287. DataXceiverServer May Double-Close PeerServer. Contributed by BELUGA BEHR.
     add 1de25d1  HDFS-9596. Remove Shuffle Method From DFSUtil. Contributed by BELUGA BEHR.
     add 7587f97  HDFS-14296. Prefer ArrayList over LinkedList in VolumeScanner. Contributed by BELUGA BEHR.
     add 67af509  HDDS-1122. Fix TestOzoneManagerRatisServer#testSubmitRatisRequest unit test failure. Contributed by Yiqun Lin.
     add 588b4c4  HDDS-1085. Create an OM API to serve snapshots to Recon server. Contributed by Aravindan Vijayan.
     add 1e0ae6e  HADOOP-15843. s3guard bucket-info command to not print a stack trace on bucket-not-found.
     add cf1a66d  HDDS-1101. SCM CA: Write Certificate information to SCM Metadata. Contributed by Anu Engineer.
     add 779dae4  YARN-9309. Improve graph text in SLS to avoid overlapping. Contributed by Bilwa S T.
     add 02d04bd  HDDS-1121. Key read failure when data is written parallel in to Ozone. Contributed by Bharat Viswanadham.
     add b8de78c  YARN-9286. [Timeline Server] Sorting based on FinalStatus shows pop-up message. Contributed by Bilwa S T.
     add 14282e3  HDFS-14188. Make hdfs ec -verifyClusterSetup command accept an erasure coding policy as a parameter. Contributed by Kitti Nanasi.
     add 0525d85  HADOOP-15967. KMS Benchmark Tool. Contributed by George Huang.
     add e8d7e3b  HDDS-1139 : Fix findbugs issues caused by HDDS-1085. Contributed by Aravindan Vijayan.
     add 51950f1  Logging stale datanode information. Contributed by  Karthik Palanisamy.
     add 1d30fd9  HDDS-1130. Make BenchMarkBlockManager multi-threaded. Contributed by Lokesh Jain.
     add 642fe6a  HDDS-1135. Ozone jars are missing in the Ozone Snapshot tar. Contributed by Dinesh Chitlangia.
     add 41e18fe  HDFS-14235. Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.
     add aa3ad36  HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace enabled. Contributed by Masatake Iwasaki.
     add 1374f8f  HDDS-1060. Add API to get OM certificate from SCM CA. Contributed by Ajay Kumar.
     add a30059b  HDFS-14267. Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples. Contributed by Sahil Takiar.
     add 676a9cb  HDDS-1053. Generate RaftGroupId from OMServiceID. Contributed by Aravindan Vijayan.
     add f5b4e0f  HDFS-14302. Refactor NameNodeWebHdfsMethods#generateDelegationToken() to allow better extensibility. Contributed by CR Hota.
     new 8bc2ad2  HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena.
     new dca3b2e  HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.
     new f61a816  HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng.
     new dde38f7  HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota.
     new 0367198  HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri.
     new 30573af  HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.
     new 9f362fa  HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng.
     new 8fe8161  HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota.
     new 0b67a7d  HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.
     new 53b69da  HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar.
     new 3f12355  HDFS-14089. RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService. Contributed by Ranith Sardar.
     new 16b8f75  HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena.
     new 0ffeac3  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
     new f659c27  Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."
     new f945456  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
     new 5640958  HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena.
     new ce9351a  HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.
     new 640fe07  HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable.
     new c49a422  HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.
     new 3b971fe  HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri.
     new 6e770ff  HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui.
     new c74f7e1  HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.
     new c30d4d9  HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar.
     new a73cfff  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.
     new b6b8d14  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.
     new b240f39  HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar.
     new d747fb1  HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.
     new 6c9c040  HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena.
     new 11210e7  HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan.
     new 30a5fba  HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.
     new f40da42  HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.
     new 2f92825  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.
     new 71f2066  HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena.
     new ec52346  HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
     new bc8317f  HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar.
     new d815bc7  HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena.
     new b28580a  HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui.
     new 5f5ba94  HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.
     new 1645df9  HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena.
     new 22d23de  HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri.
     new f476bb1  HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (215e525)
            \
             N -- N -- N   refs/heads/HDFS-13891 (f476bb1)
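
In practice this shape usually results from a rebase workflow; a hedged
sketch of the commands that could have produced it (branch and remote names
taken from this email, the rebase target assumed to be trunk):

  git checkout HDFS-13891
  git rebase trunk                      # replay the branch commits (N) onto the new trunk head
  git push --force origin HDFS-13891    # move the ref, discarding the old O history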

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.
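
If the old tip is still available locally, the revisions that are new on
this ref can be listed directly (a sketch using the tip hashes from this
email):

  # commits reachable from the new tip but not the old one
  # (the "add" and "new" revisions listed above)
  git log --oneline 215e525..f476bb1

  # print the common base B of the old and new tips
  git merge-base 215e525 f476bb1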

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.
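
Whether a given revision still exists in the repository can be checked
against its hash (a sketch using the old tip from this email):

  # prints "commit" while the object is still present; fails once the
  # revision has been discarded and garbage-collected
  git cat-file -t 215e525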

The 41 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
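
Anyone with a local checkout of HDFS-13891 will need to move it onto the
rewritten history; one common recipe (a sketch that discards any local
commits not yet pushed):

  git fetch origin
  git checkout HDFS-13891
  git reset --hard origin/HDFS-13891    # adopt the force-pushed history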


Summary of changes:
 BUILDING.txt                                       |     3 -
 LICENSE.txt                                        |     2 +-
 .../hadoop-client-minicluster/pom.xml              |     4 +-
 hadoop-common-project/hadoop-auth/pom.xml          |     2 +-
 .../server/KerberosAuthenticationHandler.java      |    11 +-
 .../authentication/util/CertificateUtil.java       |     5 +-
 .../security/authentication/util/KerberosName.java |    87 +-
 .../security/authentication/util/KerberosUtil.java |     2 +-
 .../client/TestKerberosAuthenticator.java          |     1 +
 .../server/TestAuthenticationFilter.java           |    19 +-
 .../server/TestKerberosAuthenticationHandler.java  |    59 +-
 .../authentication/util/TestKerberosName.java      |    12 +
 .../util/TestRandomSignerSecretProvider.java       |     4 +-
 .../jdiff/Apache_Hadoop_Common_3.1.2.xml           | 35695 +++++++++++++++++++
 hadoop-common-project/hadoop-common/pom.xml        |     2 +-
 .../hadoop-common/src/main/conf/log4j.properties   |    11 -
 .../java/org/apache/hadoop/conf/Configuration.java |     4 +-
 .../java/org/apache/hadoop/crypto/CipherSuite.java |     4 +-
 .../org/apache/hadoop/crypto/key/KeyProvider.java  |     3 +-
 .../hadoop/crypto/key/kms/KMSClientProvider.java   |    13 +-
 .../org/apache/hadoop/fs/AbstractFileSystem.java   |    40 +-
 .../java/org/apache/hadoop/fs/BlockLocation.java   |    13 +-
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |     7 +
 .../org/apache/hadoop/fs/DelegateToFileSystem.java |    20 +
 .../main/java/org/apache/hadoop/fs/FSBuilder.java  |   131 +
 .../org/apache/hadoop/fs/FSDataOutputStream.java   |     4 +-
 .../hadoop/fs/FSDataOutputStreamBuilder.java       |   193 +-
 .../java/org/apache/hadoop/fs/FileContext.java     |    69 +-
 .../org/apache/hadoop/fs/FileEncryptionInfo.java   |    32 +-
 .../main/java/org/apache/hadoop/fs/FileStatus.java |    34 +-
 .../main/java/org/apache/hadoop/fs/FileSystem.java |   188 +-
 .../main/java/org/apache/hadoop/fs/FileUtil.java   |    30 +-
 .../org/apache/hadoop/fs/FilterFileSystem.java     |    33 +
 .../main/java/org/apache/hadoop/fs/FilterFs.java   |    13 +
 .../hadoop/fs/FutureDataInputStreamBuilder.java    |    50 +
 .../src/main/java/org/apache/hadoop/fs/Path.java   |    12 +-
 .../org/apache/hadoop/fs/StorageStatistics.java    |    10 +-
 .../hadoop/fs/impl/AbstractFSBuilderImpl.java      |   356 +
 .../fs/impl/FutureDataInputStreamBuilderImpl.java  |   141 +
 .../org/apache/hadoop/fs/impl/FutureIOSupport.java |   191 +
 .../apache/hadoop/fs/impl/WrappedIOException.java  |    56 +
 .../org/apache/hadoop/fs/impl/package-info.java    |    49 +
 .../org/apache/hadoop/fs/permission/AclEntry.java  |     4 +-
 .../hadoop/fs/shell/CommandWithDestination.java    |    14 +-
 .../java/org/apache/hadoop/fs/shell/Count.java     |     4 +-
 .../main/java/org/apache/hadoop/fs/shell/Ls.java   |     8 +-
 .../java/org/apache/hadoop/fs/shell/PathData.java  |     6 +-
 .../apache/hadoop/fs/shell/SnapshotCommands.java   |     3 +
 .../main/java/org/apache/hadoop/fs/shell/Tail.java |    25 +-
 .../hadoop/fs/shell/find/BaseExpression.java       |     4 +-
 .../java/org/apache/hadoop/fs/shell/find/Find.java |     4 +-
 .../java/org/apache/hadoop/http/HttpServer2.java   |     1 -
 .../main/java/org/apache/hadoop/io/MD5Hash.java    |     4 +-
 .../java/org/apache/hadoop/io/SequenceFile.java    |     4 +-
 .../io/compress/CompressionCodecFactory.java       |    18 +-
 .../hadoop/io/compress/PassthroughCodec.java       |   246 +
 .../org/apache/hadoop/io/erasurecode/ECSchema.java |     8 +-
 .../org/apache/hadoop/ipc/DecayRpcScheduler.java   |     2 +-
 .../java/org/apache/hadoop/ipc/FairCallQueue.java  |    32 +-
 .../java/org/apache/hadoop/ipc/ProxyCombiner.java  |   137 +
 .../main/java/org/apache/hadoop/ipc/Server.java    |    11 +-
 .../org/apache/hadoop/ipc/WritableRpcEngine.java   |    12 +-
 .../hadoop/ipc/metrics/RpcDetailedMetrics.java     |     2 +-
 .../org/apache/hadoop/log/LogThrottlingHelper.java |     2 +-
 .../apache/hadoop/metrics2/sink/GraphiteSink.java  |     8 +-
 .../apache/hadoop/metrics2/sink/StatsDSink.java    |     6 +-
 .../hadoop/net/AbstractDNSToSwitchMapping.java     |     4 +-
 .../main/java/org/apache/hadoop/net/NetUtils.java  |     4 +-
 .../org/apache/hadoop/net/NetworkTopology.java     |    25 +-
 .../apache/hadoop/security/HadoopKerberosName.java |    11 +-
 .../java/org/apache/hadoop/security/KDiag.java     |    45 +-
 .../org/apache/hadoop/security/ProviderUtils.java  |     4 +-
 .../hadoop/security/alias/CredentialProvider.java  |     6 +-
 .../hadoop/security/alias/CredentialShell.java     |    12 +-
 .../security/authorize/AccessControlList.java      |     6 +-
 .../hadoop/security/ssl/SSLHostnameVerifier.java   |     6 +-
 .../org/apache/hadoop/security/token/Token.java    |    33 +-
 .../delegation/ZKDelegationTokenSecretManager.java |     2 +-
 .../web/DelegationTokenAuthenticator.java          |     3 +-
 .../service/launcher/InterruptEscalator.java       |     6 +-
 .../org/apache/hadoop/tools/GetGroupsBase.java     |     4 +-
 .../util/BlockingThreadPoolExecutorService.java    |     6 +-
 .../org/apache/hadoop/util/CpuTimeTracker.java     |    12 +-
 .../java/org/apache/hadoop/util/LambdaUtils.java   |    59 +
 .../hadoop/util/SemaphoredDelegatingExecutor.java  |     8 +-
 .../main/java/org/apache/hadoop/util/Shell.java    |    14 +-
 .../java/org/apache/hadoop/util/SignalLogger.java  |     4 +-
 .../hadoop/util/bloom/DynamicBloomFilter.java      |     4 +-
 .../apache/hadoop/io/erasurecode/jni_xor_decoder.c |     2 +
 .../hadoop-common/src/main/proto/Security.proto    |     1 +
 .../src/main/resources/core-default.xml            |   181 +-
 .../src/site/markdown/FairCallQueue.md             |   150 +
 .../hadoop-common/src/site/markdown/Metrics.md     |    10 +
 .../hadoop-common/src/site/markdown/SecureMode.md  |    29 +-
 .../src/site/markdown/filesystem/filesystem.md     |    87 +-
 .../site/markdown/filesystem/fsdatainputstream.md  |    14 +
 .../filesystem/fsdatainputstreambuilder.md         |   112 +
 .../filesystem/fsdataoutputstreambuilder.md        |     6 +-
 .../site/markdown/release/3.1.2/CHANGELOG.3.1.2.md |   158 -
 .../site/markdown/release/3.1.2/CHANGES.3.1.2.md   |   382 +
 .../markdown/release/3.1.2/RELEASENOTES.3.1.2.md   |    60 +
 .../site/markdown/release/3.2.0/CHANGELOG.3.2.0.md |   315 +-
 .../markdown/release/3.2.0/RELEASENOTES.3.2.0.md   |   100 +-
 .../resources/images/faircallqueue-overview.png    |   Bin 0 -> 47397 bytes
 .../apache/hadoop/conf/TestReconfiguration.java    |     6 +-
 .../kms/TestLoadBalancingKMSClientProvider.java    |    45 +
 .../fs/FileContextMainOperationsBaseTest.java      |    95 +-
 .../src/test/java/org/apache/hadoop/fs/TestDU.java |     3 +
 .../org/apache/hadoop/fs/TestFileSystemTokens.java |     1 -
 .../org/apache/hadoop/fs/TestFilterFileSystem.java |     1 -
 .../java/org/apache/hadoop/fs/TestFsShell.java     |     2 +-
 .../org/apache/hadoop/fs/TestHarFileSystem.java    |    20 +
 .../org/apache/hadoop/fs/TestLocalFileSystem.java  |     2 +-
 .../AbstractContractGetFileStatusTest.java         |     4 +-
 .../fs/contract/AbstractContractOpenTest.java      |   135 +-
 .../contract/AbstractContractPathHandleTest.java   |    61 +
 .../hadoop/fs/contract/ContractTestUtils.java      |    31 +
 .../java/org/apache/hadoop/fs/shell/TestCopy.java  |     4 +-
 .../java/org/apache/hadoop/fs/shell/TestLs.java    |     2 +-
 .../java/org/apache/hadoop/fs/shell/TestMove.java  |     2 +-
 .../java/org/apache/hadoop/fs/shell/TestTail.java  |    57 +
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java    |    13 +-
 .../apache/hadoop/ha/TestActiveStandbyElector.java |    23 +-
 .../apache/hadoop/http/TestIsActiveServlet.java    |     6 +-
 .../hadoop/http/lib/TestStaticUserWebFilter.java   |     3 +-
 .../org/apache/hadoop/io/retry/TestRetryProxy.java |     6 +-
 .../org/apache/hadoop/ipc/TestFairCallQueue.java   |    42 +-
 .../test/java/org/apache/hadoop/ipc/TestIPC.java   |    28 +-
 .../test/java/org/apache/hadoop/ipc/TestRPC.java   |     6 +-
 .../java/org/apache/hadoop/ipc/TestServer.java     |     5 +-
 .../hadoop/metrics2/impl/TestGraphiteMetrics.java  |     4 +-
 .../metrics2/impl/TestMetricsSystemImpl.java       |     2 +-
 .../hadoop/metrics2/impl/TestMetricsVisitor.java   |     2 +-
 .../hadoop/metrics2/lib/TestMutableMetrics.java    |     4 +-
 .../metrics2/lib/TestMutableRollingAverages.java   |     4 +-
 .../org/apache/hadoop/net/TestClusterTopology.java |    35 +
 .../hadoop/security/TestAuthenticationFilter.java  |    10 +-
 .../java/org/apache/hadoop/security/TestKDiag.java |    16 +
 .../hadoop/security/TestLdapGroupsMapping.java     |     4 +-
 .../TestLdapGroupsMappingWithFailover.java         |     4 +-
 .../TestLdapGroupsMappingWithOneQuery.java         |     6 +-
 .../TestLdapGroupsMappingWithPosixGroup.java       |     2 +-
 .../security/TestRuleBasedLdapGroupsMapping.java   |     4 +-
 .../hadoop/security/TestUserGroupInformation.java  |    36 +-
 .../security/http/TestXFrameOptionsFilter.java     |    16 +-
 .../apache/hadoop/security/ssl/TestSSLFactory.java |     3 +-
 .../hadoop/service/TestServiceOperations.java      |     2 +-
 .../org/apache/hadoop/test/LambdaTestUtils.java    |   221 +
 .../org/apache/hadoop/test/MetricsAsserts.java     |    21 +-
 .../apache/hadoop/test/TestLambdaTestUtils.java    |   114 +-
 .../java/org/apache/hadoop/util/TestRunJar.java    |     2 +-
 .../hadoop-common/src/test/resources/testConf.xml  |     6 +-
 hadoop-common-project/hadoop-kms/pom.xml           |     2 +-
 .../hadoop/crypto/key/kms/server/KMSBenchmark.java |   627 +
 .../hadoop/crypto/key/kms/server/TestKMS.java      |    64 +
 hadoop-common-project/hadoop-nfs/pom.xml           |     2 +-
 .../hadoop/registry/secure/TestSecureLogins.java   |    13 +-
 .../hdds/scm/ClientCredentialInterceptor.java      |    65 +
 .../apache/hadoop/hdds/scm/XceiverClientGrpc.java  |   126 +-
 .../hadoop/hdds/scm/XceiverClientManager.java      |    27 +-
 .../apache/hadoop/hdds/scm/XceiverClientRatis.java |    55 +-
 .../hdds/scm/client/ContainerOperationClient.java  |    20 +-
 .../hadoop/hdds/scm/client/HddsClientUtils.java    |    38 +
 .../hadoop/hdds/scm/storage/BlockInputStream.java  |   407 +
 .../hadoop/hdds/scm/storage/BlockOutputStream.java |    36 +-
 .../hadoop/hdds/scm/storage/ChunkInputStream.java  |   290 -
 hadoop-hdds/common/pom.xml                         |    32 +-
 .../org/apache/hadoop/hdds/HddsConfigKeys.java     |   133 +-
 .../java/org/apache/hadoop/hdds/HddsUtils.java     |    99 +-
 .../org/apache/hadoop/hdds/cli/GenericCli.java     |     7 +-
 .../hadoop/hdds/cli/HddsVersionProvider.java       |     2 +-
 .../org/apache/hadoop/hdds/client/BlockID.java     |     8 +-
 .../hadoop/hdds/client/ContainerBlockID.java       |    10 +-
 .../apache/hadoop/hdds/conf/HddsConfServlet.java   |     3 +
 .../hadoop/hdds/conf/OzoneConfiguration.java       |     2 +
 .../hadoop/hdds/protocol/SCMSecurityProtocol.java  |    74 +
 .../SCMSecurityProtocolClientSideTranslatorPB.java |   164 +
 .../hdds/protocolPB/SCMSecurityProtocolPB.java     |    35 +
 .../SCMSecurityProtocolServerSideTranslatorPB.java |   129 +
 .../hadoop/hdds/protocolPB/package-info.java       |    22 +
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java  |    39 +-
 .../hadoop/hdds/scm/XceiverClientAsyncReply.java   |    56 -
 .../apache/hadoop/hdds/scm/XceiverClientReply.java |    73 +
 .../apache/hadoop/hdds/scm/XceiverClientSpi.java   |    33 +-
 .../hadoop/hdds/scm/container/ContainerInfo.java   |     1 +
 .../apache/hadoop/hdds/scm/pipeline/Pipeline.java  |     5 +
 .../pipeline/UnknownPipelineStateException.java    |     2 +-
 .../scm/protocol/ScmBlockLocationProtocol.java     |     3 +
 .../protocol/StorageContainerLocationProtocol.java |     3 +
 ...lockLocationProtocolClientSideTranslatorPB.java |    65 +-
 .../scm/protocolPB/ScmBlockLocationProtocolPB.java |     6 +-
 ...inerLocationProtocolClientSideTranslatorPB.java |    20 +-
 .../StorageContainerLocationProtocolPB.java        |     4 +
 .../hdds/scm/storage/ContainerProtocolCalls.java   |   181 +-
 .../security/exception/SCMSecurityException.java   |    79 +
 .../hdds/security/exception/package-info.java      |    23 +
 .../hdds/security/token/BlockTokenException.java   |    53 +
 .../hdds/security/token/BlockTokenVerifier.java    |   131 +
 .../security/token/OzoneBlockTokenIdentifier.java  |   199 +
 .../security/token/OzoneBlockTokenSelector.java    |    75 +
 .../hadoop/hdds/security/token/TokenVerifier.java  |    38 +
 .../hadoop/hdds/security/token/package-info.java   |    22 +
 .../hadoop/hdds/security/x509/SecurityConfig.java  |   462 +
 .../x509/certificate/authority/BaseApprover.java   |   249 +
 .../certificate/authority/CertificateApprover.java |    86 +
 .../certificate/authority/CertificateServer.java   |   123 +
 .../certificate/authority/CertificateStore.java    |    80 +
 .../certificate/authority/DefaultApprover.java     |   127 +
 .../certificate/authority/DefaultCAServer.java     |   475 +
 .../authority/PKIProfiles/DefaultCAProfile.java    |    46 +
 .../authority/PKIProfiles/DefaultProfile.java      |   333 +
 .../authority/PKIProfiles/PKIProfile.java          |   140 +
 .../authority/PKIProfiles/package-info.java        |    33 +
 .../x509/certificate/authority/package-info.java   |    22 +
 .../x509/certificate/client/CertificateClient.java |   173 +
 .../certificate/client/DNCertificateClient.java    |    40 +
 .../client/DefaultCertificateClient.java           |   632 +
 .../certificate/client/OMCertificateClient.java    |   102 +
 .../x509/certificate/client/package-info.java      |    22 +
 .../x509/certificate/utils/CertificateCodec.java   |   296 +
 .../x509/certificate/utils/package-info.java       |    22 +
 .../certificates/utils/CertificateSignRequest.java |   276 +
 .../certificates/utils/SelfSignedCertificate.java  |   238 +
 .../x509/certificates/utils/package-info.java      |    22 +
 .../x509/exceptions/CertificateException.java      |    87 +
 .../security/x509/exceptions/package-info.java     |    23 +
 .../hdds/security/x509/keys/HDDSKeyGenerator.java  |   118 +
 .../hadoop/hdds/security/x509/keys/KeyCodec.java   |   411 +
 .../hdds/security/x509/keys/SecurityUtil.java      |   138 +
 .../hdds/security/x509/keys/package-info.java      |    23 +
 .../hadoop/hdds/security/x509/package-info.java    |    99 +
 .../hadoop/hdds/tracing/GrpcClientInterceptor.java |    57 +
 .../hadoop/hdds/tracing/GrpcServerInterceptor.java |    51 +
 .../apache/hadoop/hdds/tracing/StringCodec.java    |    89 +
 .../apache/hadoop/hdds/tracing/TraceAllMethod.java |    86 +
 .../apache/hadoop/hdds/tracing/TracingUtil.java    |   112 +
 .../apache/hadoop/hdds/tracing/package-info.java   |    23 +
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java   |    14 +-
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |    31 +
 .../org/apache/hadoop/ozone/OzoneSecurityUtil.java |    60 +
 .../org/apache/hadoop/ozone/common/Checksum.java   |     2 +
 .../apache/hadoop/ozone/common/StorageInfo.java    |     2 +-
 ...inerLocationProtocolServerSideTranslatorPB.java |    41 +-
 .../org/apache/hadoop/utils/HddsVersionInfo.java   |   158 +-
 .../apache/hadoop/utils/MetadataStoreBuilder.java  |    28 +-
 .../org/apache/hadoop/utils/RetriableTask.java     |    78 +
 .../java/org/apache/hadoop/utils/Scheduler.java    |   101 +
 .../java/org/apache/hadoop/utils/VersionInfo.java  |    97 +
 .../java/org/apache/hadoop/utils/db/Codec.java     |     6 +-
 .../org/apache/hadoop/utils/db/CodecRegistry.java  |     5 +-
 .../hadoop/utils/db/DBCheckpointSnapshot.java      |    53 +
 .../java/org/apache/hadoop/utils/db/DBStore.java   |     7 +
 .../hadoop/utils/db/RDBCheckpointManager.java      |   130 +
 .../java/org/apache/hadoop/utils/db/RDBStore.java  |    36 +
 .../apache/hadoop/utils/db/RDBStoreIterator.java   |    21 +-
 .../org/apache/hadoop/utils/db/StringCodec.java    |     5 +-
 .../java/org/apache/hadoop/utils/db/Table.java     |     4 +-
 .../org/apache/hadoop/utils/db/TableIterator.java  |    15 +-
 .../org/apache/hadoop/utils/db/TypedTable.java     |    28 +-
 .../main/java/org/apache/ratis/RatisHelper.java    |    89 +-
 .../src/main/proto/DatanodeContainerProtocol.proto |     2 +-
 .../src/main/proto/SCMSecurityProtocol.proto       |   106 +
 .../proto/StorageContainerLocationProtocol.proto   |    19 +-
 hadoop-hdds/common/src/main/proto/hdds.proto       |    36 +
 .../common/src/main/resources/ozone-default.xml    |   404 +-
 .../token/TestOzoneBlockTokenIdentifier.java       |   313 +
 .../hadoop/hdds/security/token/package-info.java   |    22 +
 .../x509/certificate/authority/MockApprover.java   |    57 +
 .../x509/certificate/authority/MockCAStore.java    |    54 +
 .../certificate/authority/TestDefaultCAServer.java |   171 +
 .../certificate/authority/TestDefaultProfile.java  |   364 +
 .../x509/certificate/authority/package-info.java   |    22 +
 .../client/TestCertificateClientInit.java          |   206 +
 .../client/TestDefaultCertificateClient.java       |   336 +
 .../certificate/utils/TestCertificateCodec.java    |   218 +
 .../x509/certificate/utils/package-info.java       |    23 +
 .../certificates/TestCertificateSignRequest.java   |   285 +
 .../x509/certificates/TestRootCertificate.java     |   258 +
 .../security/x509/certificates/package-info.java   |    22 +
 .../security/x509/keys/TestHDDSKeyGenerator.java   |    87 +
 .../hdds/security/x509/keys/TestKeyCodec.java      |   231 +
 .../hdds/security/x509/keys/package-info.java      |    22 +
 .../hadoop/hdds/security/x509/package-info.java    |    22 +
 .../org/apache/hadoop/utils/TestRetriableTask.java |    76 +
 .../apache/hadoop/utils/db/TestDBStoreBuilder.java |     5 +-
 .../org/apache/hadoop/utils/db/TestRDBStore.java   |    94 +-
 .../apache/hadoop/utils/db/TestRDBTableStore.java  |    17 +-
 .../hadoop/utils/db/TestTypedRDBTableStore.java    |     8 +-
 .../dev-support/findbugsExcludeFile.xml            |    12 +
 hadoop-hdds/container-service/pom.xml              |     6 +
 .../org/apache/hadoop/hdds/scm/HddsServerUtil.java |    23 +
 .../apache/hadoop/ozone/HddsDatanodeService.java   |    35 +
 .../container/common/impl/ContainerDataYaml.java   |    17 +-
 .../ozone/container/common/impl/ContainerSet.java  |     3 +-
 .../container/common/impl/HddsDispatcher.java      |    18 +-
 .../container/common/interfaces/Container.java     |     8 +-
 .../ozone/container/common/interfaces/Handler.java |    12 +
 .../common/statemachine/DatanodeStateMachine.java  |     3 +
 .../common/statemachine/StateContext.java          |    34 +-
 .../CloseContainerCommandHandler.java              |    94 +-
 .../DeleteContainerCommandHandler.java             |    85 +
 .../states/endpoint/HeartbeatEndpointTask.java     |    11 +
 .../states/endpoint/VersionEndpointTask.java       |     3 +-
 .../transport/server/GrpcXceiverService.java       |    16 +
 .../server/ServerCredentialInterceptor.java        |    74 +
 .../common/transport/server/XceiverServer.java     |    87 +
 .../common/transport/server/XceiverServerGrpc.java |    46 +-
 .../server/ratis/ContainerStateMachine.java        |   154 +-
 .../transport/server/ratis/DispatcherContext.java  |    18 +-
 .../transport/server/ratis/XceiverServerRatis.java |   299 +-
 .../container/common/volume/AbstractFuture.java    |  1298 +
 .../ozone/container/common/volume/HddsVolume.java  |    25 +-
 .../container/common/volume/HddsVolumeChecker.java |   421 +
 .../common/volume/ThrottledAsyncChecker.java       |   247 +
 .../container/common/volume/TimeoutFuture.java     |   161 +
 .../ozone/container/common/volume/VolumeInfo.java  |     2 +-
 .../ozone/container/common/volume/VolumeSet.java   |   152 +-
 .../container/keyvalue/KeyValueContainer.java      |    65 +-
 .../ozone/container/keyvalue/KeyValueHandler.java  |   151 +-
 .../helpers/KeyValueContainerLocationUtil.java     |    13 +-
 .../keyvalue/helpers/KeyValueContainerUtil.java    |     3 +-
 .../container/ozoneimpl/ContainerController.java   |    15 +-
 .../ozone/container/ozoneimpl/ContainerReader.java |     4 +-
 .../ozone/container/ozoneimpl/OzoneContainer.java  |     1 +
 .../replication/GrpcReplicationService.java        |     9 +-
 .../replication/SimpleContainerDownloader.java     |     1 -
 .../protocol/StorageContainerDatanodeProtocol.java |     4 +
 .../protocol/commands/CloseContainerCommand.java   |    13 +-
 .../commands/DeleteBlockCommandStatus.java         |    12 +-
 .../protocol/commands/DeleteBlocksCommand.java     |     6 +-
 .../protocol/commands/DeleteContainerCommand.java  |    86 +
 .../commands/ReplicateContainerCommand.java        |     4 -
 .../ozone/protocol/commands/ReregisterCommand.java |    11 +-
 .../hadoop/ozone/protocol/commands/SCMCommand.java |     2 +-
 .../StorageContainerDatanodeProtocolPB.java        |     6 +
 .../proto/StorageContainerDatanodeProtocol.proto   |     1 +
 .../ozone/container/common/SCMTestUtils.java       |    22 +-
 .../common/impl/TestContainerDataYaml.java         |     9 +-
 .../container/common/impl/TestHddsDispatcher.java  |    86 +-
 .../container/common/interfaces/TestHandler.java   |     4 +-
 .../TestCloseContainerCommandHandler.java          |    23 +-
 .../common/volume/TestHddsVolumeChecker.java       |   212 +
 .../common/volume/TestVolumeSetDiskChecks.java     |   182 +
 .../container/keyvalue/TestKeyValueContainer.java  |     4 +-
 .../TestKeyValueContainerMarkUnhealthy.java        |   172 +
 .../TestKeyValueHandlerWithUnhealthyContainer.java |   231 +
 .../src/test/resources/incorrect.container         |     2 +-
 hadoop-hdds/docs/content/AuditParser.md            |    72 +
 hadoop-hdds/docs/content/BucketCommands.md         |    19 +-
 hadoop-hdds/docs/content/Dozone.md                 |    17 +-
 hadoop-hdds/docs/content/JavaApi.md                |    36 +-
 hadoop-hdds/docs/content/KeyCommands.md            |    23 +-
 hadoop-hdds/docs/content/OzoneFS.md                |    21 +-
 hadoop-hdds/docs/content/Rest.md                   |     2 +-
 hadoop-hdds/docs/content/S3.md                     |     6 +-
 hadoop-hdds/docs/content/S3Commands.md             |    41 +
 hadoop-hdds/docs/content/VolumeCommands.md         |     2 +-
 hadoop-hdds/docs/pom.xml                           |     8 +-
 .../apache/hadoop/hdds/server/BaseHttpServer.java  |    12 +
 .../apache/hadoop/hdds/server/ProfileServlet.java  |   476 +
 .../hadoop/hdds/server/ServiceRuntimeInfoImpl.java |    15 +-
 .../hadoop/hdds/server/events/EventWatcher.java    |    12 +-
 hadoop-hdds/pom.xml                                |    19 +-
 hadoop-hdds/server-scm/pom.xml                     |     1 -
 .../java/org/apache/hadoop/hdds/scm/ScmUtils.java  |    37 +
 .../hadoop/hdds/scm/block/BlockManagerImpl.java    |   161 +-
 .../hadoop/hdds/scm/block/DeletedBlockLogImpl.java |   263 +-
 .../hdds/scm/block/PendingDeleteHandler.java       |     3 +
 .../hdds/scm/chillmode/SCMChillModeManager.java    |     7 +
 .../scm/command/CommandStatusReportHandler.java    |    12 +
 .../hdds/scm/container/ContainerManager.java       |    16 +-
 .../hdds/scm/container/ContainerReportHandler.java |    25 +-
 .../hdds/scm/container/ContainerStateManager.java  |   142 +-
 .../container/DeleteContainerCommandWatcher.java   |    56 +
 .../hdds/scm/container/ReportHandlerHelper.java    |     5 +-
 .../hdds/scm/container/SCMContainerManager.java    |    68 +-
 .../container/replication/ReplicationManager.java  |   168 +-
 .../hdds/scm/container/states/ContainerState.java  |    32 +-
 .../apache/hadoop/hdds/scm/events/SCMEvents.java   |    21 +
 .../hadoop/hdds/scm/metadata/BigIntegerCodec.java  |    39 +
 .../metadata/DeletedBlocksTransactionCodec.java    |    49 +
 .../apache/hadoop/hdds/scm/metadata/LongCodec.java |    40 +
 .../hadoop/hdds/scm/metadata/SCMMetadataStore.java |   103 +
 .../hdds/scm/metadata/SCMMetadataStoreRDBImpl.java |   198 +
 .../hdds/scm/metadata/X509CertificateCodec.java    |    54 +
 .../hadoop/hdds/scm/metadata/package-info.java     |    21 +
 .../apache/hadoop/hdds/scm/node/DatanodeInfo.java  |     7 +-
 .../hadoop/hdds/scm/node/DeadNodeHandler.java      |    29 +-
 .../hadoop/hdds/scm/node/NewNodeHandler.java       |    14 +-
 .../apache/hadoop/hdds/scm/node/NodeManager.java   |    15 +-
 .../hadoop/hdds/scm/node/NodeStateManager.java     |    67 +-
 .../scm/node/NonHealthyToHealthyNodeHandler.java   |    48 +
 .../hadoop/hdds/scm/node/SCMNodeManager.java       |   174 +-
 .../hadoop/hdds/scm/node/StaleNodeHandler.java     |     2 +
 .../hdds/scm/node/states/Node2ObjectsMap.java      |     2 +
 .../hdds/scm/node/states/Node2PipelineMap.java     |     8 +-
 .../hadoop/hdds/scm/node/states/NodeStateMap.java  |    58 -
 .../hdds/scm/pipeline/PipelineActionHandler.java   |     2 +
 .../hadoop/hdds/scm/pipeline/PipelineFactory.java  |     6 +
 .../hadoop/hdds/scm/pipeline/PipelineManager.java  |     4 +-
 .../hadoop/hdds/scm/pipeline/PipelineProvider.java |     2 +
 .../hdds/scm/pipeline/PipelineReportHandler.java   |     3 +
 .../hdds/scm/pipeline/PipelineStateManager.java    |    19 +-
 .../hadoop/hdds/scm/pipeline/PipelineStateMap.java |    80 +-
 .../hdds/scm/pipeline/RatisPipelineProvider.java   |    14 +-
 .../hdds/scm/pipeline/RatisPipelineUtils.java      |   138 +-
 .../hdds/scm/pipeline/SCMPipelineManager.java      |     9 +-
 .../hdds/scm/pipeline/SimplePipelineProvider.java  |     5 +
 .../hdds/scm/server/SCMBlockProtocolServer.java    |     6 +-
 .../hadoop/hdds/scm/server/SCMCertStore.java       |   115 +
 .../hdds/scm/server/SCMClientProtocolServer.java   |     4 +-
 .../hadoop/hdds/scm/server/SCMConfigurator.java    |   202 +
 .../hdds/scm/server/SCMDatanodeProtocolServer.java |    12 +-
 .../hdds/scm/server/SCMSecurityProtocolServer.java |   211 +
 .../apache/hadoop/hdds/scm/server/SCMStorage.java  |    73 -
 .../hadoop/hdds/scm/server/SCMStorageConfig.java   |    73 +
 .../hdds/scm/server/StorageContainerManager.java   |   400 +-
 .../server/StorageContainerManagerHttpServer.java  |     5 +-
 .../org/apache/hadoop/hdds/scm/HddsTestUtils.java  |    25 +
 .../java/org/apache/hadoop/hdds/scm/TestUtils.java |    39 +
 .../hadoop/hdds/scm/block/TestBlockManager.java    |   200 +-
 .../hadoop/hdds/scm/block/TestDeletedBlockLog.java |    29 +-
 .../command/TestCommandStatusReportHandler.java    |     8 +-
 .../hadoop/hdds/scm/container/MockNodeManager.java |    31 +-
 .../scm/container/TestSCMContainerManager.java     |     4 +-
 .../TestSCMContainerPlacementCapacity.java         |     3 +
 .../TestSCMContainerPlacementRandom.java           |     3 +
 .../replication/TestReplicationManager.java        |    95 +-
 .../hadoop/hdds/scm/node/TestDeadNodeHandler.java  |   102 +-
 .../hdds/scm/node/TestNodeReportHandler.java       |     6 +-
 .../hadoop/hdds/scm/node/TestSCMNodeManager.java   |    81 +-
 .../scm/server/TestSCMSecurityProtocolServer.java  |    60 +
 .../testutils/ReplicationNodeManagerMock.java      |    11 +-
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml     |     2 +-
 .../org/apache/hadoop/fs/HdfsBlockLocation.java    |     4 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java     |    22 +-
 .../org/apache/hadoop/hdfs/DFSInputStream.java     |    12 +-
 .../apache/hadoop/hdfs/DFSOpsCountStatistics.java  |    14 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java    |     6 +-
 .../apache/hadoop/hdfs/DFSStripedOutputStream.java |     5 +-
 .../java/org/apache/hadoop/hdfs/DFSUtilClient.java |     8 +-
 .../apache/hadoop/hdfs/DistributedFileSystem.java  |    81 +-
 .../hadoop/hdfs/protocol/ClientProtocol.java       |    15 +-
 .../apache/hadoop/hdfs/protocol/DatanodeInfo.java  |    83 +-
 .../hadoop/hdfs/protocol/HdfsPathHandle.java       |     4 +-
 .../hadoop/hdfs/protocol/ReencryptionStatus.java   |    12 +-
 .../ClientNamenodeProtocolTranslatorPB.java        |     6 +-
 .../hadoop/hdfs/protocolPB/PBHelperClient.java     |    16 +-
 .../hdfs/server/namenode/ha/HAProxyFactory.java    |     9 +
 .../namenode/ha/ObserverReadProxyProvider.java     |     2 +-
 .../apache/hadoop/hdfs/util/StripedBlockUtil.java  |    10 +-
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  |     6 +
 .../hadoop/hdfs/web/resources/PutOpParam.java      |     1 +
 .../src/main/proto/ClientNamenodeProtocol.proto    |     1 +
 .../ha/TestRequestHedgingProxyProvider.java        |    43 +-
 .../hadoop/hdfs/web/TestByteRangeInputStream.java  |     4 +-
 .../apache/hadoop/hdfs/web/TestTokenAspect.java    |     9 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml     |     2 +-
 .../hadoop-hdfs-native-client/pom.xml              |     2 +-
 .../hadoop-hdfs-native-client/src/CMakeLists.txt   |    13 +-
 .../main/native/libhdfs-examples/CMakeLists.txt    |    34 +
 .../src/main/native/libhdfs-examples/README.md     |    24 +
 .../main/native/libhdfs-examples/libhdfs_read.c    |    77 +
 .../main/native/libhdfs-examples/libhdfs_write.c   |   104 +
 .../main/native/libhdfs-examples/test-libhdfs.sh   |   152 +
 .../main/native/libhdfs-tests/test_libhdfs_ops.c   |   119 +-
 .../main/native/libhdfs-tests/test_libhdfs_read.c  |    72 -
 .../main/native/libhdfs-tests/test_libhdfs_write.c |    99 -
 .../src/main/native/libhdfs/CMakeLists.txt         |     8 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml        |     2 +-
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml        |     2 +-
 .../server/federation/resolver/PathLocation.java   |     6 +-
 .../federation/router/ConnectionContext.java       |    10 +-
 .../federation/router/RouterClientProtocol.java    |     7 +-
 .../server/federation/router/RouterQuotaUsage.java |     4 +-
 .../server/federation/router/RouterRpcServer.java  |     5 +-
 .../federation/router/RouterWebHdfsMethods.java    |     1 +
 .../server/federation/FederationTestUtils.java     |     4 +-
 .../resolver/order/TestAvailableSpaceResolver.java |     2 +-
 .../resolver/order/TestLocalResolver.java          |     2 +-
 .../server/federation/router/TestRouterAdmin.java  |     2 +-
 .../federation/router/TestRouterAdminCLI.java      |     2 +-
 .../server/federation/router/TestRouterRpc.java    |     2 +-
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.1.2.xml |   676 +
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.2.0.xml |   674 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml            |     9 +-
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |     4 +
 .../main/java/org/apache/hadoop/hdfs/DFSUtil.java  |    17 -
 .../org/apache/hadoop/hdfs/NameNodeProxies.java    |   117 +-
 ...ientNamenodeProtocolServerSideTranslatorPB.java |     2 +-
 .../hdfs/qjournal/server/JournalNodeSyncer.java    |    25 +-
 .../token/block/BlockTokenSecretManager.java       |    16 +
 .../hadoop/hdfs/server/balancer/Dispatcher.java    |     6 +-
 .../hdfs/server/balancer/NameNodeConnector.java    |    11 +-
 .../hdfs/server/blockmanagement/BlockManager.java  |     3 +-
 .../blockmanagement/DatanodeAdminManager.java      |     3 +-
 .../server/blockmanagement/HeartbeatManager.java   |     3 +
 .../hdfs/server/common/ECTopologyVerifier.java     |    67 +-
 .../hadoop/hdfs/server/datanode/BlockReceiver.java |    40 +-
 .../hadoop/hdfs/server/datanode/DataNode.java      |    13 +-
 .../hadoop/hdfs/server/datanode/DataXceiver.java   |     3 +-
 .../hdfs/server/datanode/DataXceiverServer.java    |   415 +-
 .../hadoop/hdfs/server/datanode/DiskBalancer.java  |    21 +-
 .../hadoop/hdfs/server/datanode/VolumeScanner.java |    22 +-
 .../server/datanode/checker/AbstractFuture.java    |     1 +
 .../hdfs/server/datanode/checker/AsyncChecker.java |     2 +-
 .../datanode/checker/DatasetVolumeChecker.java     |     2 +-
 .../datanode/checker/StorageLocationChecker.java   |     2 +-
 .../datanode/checker/ThrottledAsyncChecker.java    |     8 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java     |     2 +-
 .../datanode/metrics/DataNodeDiskMetrics.java      |    78 +-
 .../server/diskbalancer/command/PlanCommand.java   |     3 +-
 .../hadoop/hdfs/server/namenode/Checkpointer.java  |    22 +-
 .../namenode/ContentSummaryComputationContext.java |     2 +-
 .../server/namenode/EncryptionZoneManager.java     |     7 +-
 .../namenode/ErasureCodingPolicyManager.java       |    22 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java     |    26 +-
 .../hadoop/hdfs/server/namenode/FSEditLog.java     |    20 +-
 .../hdfs/server/namenode/FSEditLogLoader.java      |     4 +-
 .../hadoop/hdfs/server/namenode/FSEditLogOp.java   |   734 +-
 .../server/namenode/FSImageFormatProtobuf.java     |     7 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |    52 +-
 .../hdfs/server/namenode/GlobalStateIdContext.java |     6 +-
 .../hadoop/hdfs/server/namenode/JournalSet.java    |     4 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java    |    72 +-
 .../hadoop/hdfs/server/namenode/NamenodeFsck.java  |    66 +-
 .../server/namenode/QuotaByStorageTypeEntry.java   |     6 +-
 .../namenode/RedundantEditLogInputStream.java      |     4 +-
 .../hdfs/server/namenode/StoragePolicySummary.java |    13 +-
 .../server/namenode/ha/NameNodeHAProxyFactory.java |     9 +-
 .../web/resources/NamenodeWebHdfsMethods.java      |    14 +-
 .../hdfs/server/protocol/BalancerProtocols.java    |    30 +
 .../hadoop/hdfs/server/protocol/ServerCommand.java |     6 +-
 .../hadoop/hdfs/tools/DFSZKFailoverController.java |     8 +-
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java   |     8 +-
 .../java/org/apache/hadoop/hdfs/tools/ECAdmin.java |    95 +-
 .../hadoop/hdfs/tools/StoragePolicyAdmin.java      |     7 +-
 .../tools/offlineImageViewer/FSImageHandler.java   |     7 +-
 .../tools/offlineImageViewer/FSImageLoader.java    |     8 +-
 .../offlineImageViewer/PBImageTextWriter.java      |     6 +-
 .../tools/offlineImageViewer/PBImageXmlWriter.java |     6 +-
 .../src/main/native/tests/test-libhdfs.sh          |   152 -
 .../src/main/resources/hdfs-default.xml            |    11 +
 .../src/main/webapps/hdfs/explorer.html            |     4 +-
 .../hadoop-hdfs/src/main/webapps/hdfs/explorer.js  |    28 +
 .../hadoop-hdfs/src/site/markdown/Federation.md    |     2 +-
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |     2 +
 .../src/site/markdown/HDFSErasureCoding.md         |     5 +
 .../hadoop-hdfs/src/site/markdown/HdfsDesign.md    |     5 +
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md       |    14 +
 .../fs/contract/hdfs/TestHDFSContractOpen.java     |     2 +-
 .../java/org/apache/hadoop/hdfs/DFSTestUtil.java   |     2 +-
 .../hadoop/hdfs/TestBlockTokenWrappingQOP.java     |   190 +
 .../apache/hadoop/hdfs/TestDFSClientFailover.java  |    11 +-
 .../apache/hadoop/hdfs/TestDFSClientRetries.java   |    29 +-
 .../apache/hadoop/hdfs/TestDFSOutputStream.java    |     4 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java     |     2 +-
 .../org/apache/hadoop/hdfs/TestDecommission.java   |    16 +-
 .../hadoop/hdfs/TestDecommissionWithStriped.java   |    16 +-
 .../hadoop/hdfs/TestDistributedFileSystem.java     |   164 +-
 .../apache/hadoop/hdfs/TestEncryptionZones.java    |    16 +-
 .../org/apache/hadoop/hdfs/TestFileAppend4.java    |    11 +-
 .../org/apache/hadoop/hdfs/TestFileCreation.java   |     2 +-
 .../java/org/apache/hadoop/hdfs/TestLease.java     |    19 +-
 .../org/apache/hadoop/hdfs/TestReplication.java    |     7 +-
 .../client/impl/TestBlockReaderIoProvider.java     |     4 +-
 .../client/impl/TestBlockReaderLocalMetrics.java   |    23 +-
 .../hadoop/hdfs/protocol/TestBlockListAsLongs.java |     8 +-
 .../client/TestQuorumJournalManagerUnit.java       |     6 +-
 .../hdfs/qjournal/server/TestJournalNodeSync.java  |    90 +-
 .../hdfs/security/token/block/TestBlockToken.java  |     6 +-
 .../balancer/TestBalancerWithHANameNodes.java      |   101 +-
 .../blockmanagement/TestBlockInfoStriped.java      |    45 +
 .../server/blockmanagement/TestBlockManager.java   |     2 +-
 .../blockmanagement/TestBlockManagerSafeMode.java  |     2 +-
 .../blockmanagement/TestCorruptReplicaInfo.java    |     2 +-
 .../blockmanagement/TestDatanodeManager.java       |     2 +-
 .../TestLowRedundancyBlockQueues.java              |     4 +-
 .../blockmanagement/TestReplicationPolicy.java     |     2 +-
 .../TestSortLocatedStripedBlock.java               |     6 +-
 .../hdfs/server/datanode/BlockReportTestBase.java  |    11 +-
 .../server/datanode/InternalDataNodeTestUtils.java |    16 +-
 .../hdfs/server/datanode/TestBPOfferService.java   |    73 +-
 .../datanode/TestBlockCountersInPendingIBR.java    |     4 +-
 .../hdfs/server/datanode/TestBlockRecovery.java    |    31 +-
 .../datanode/TestDataNodeHotSwapVolumes.java       |     4 +-
 .../hdfs/server/datanode/TestDataNodeLifeline.java |    11 +-
 .../datanode/TestDataNodeReconfiguration.java      |   120 +-
 .../datanode/TestDataXceiverBackwardsCompat.java   |    15 +-
 .../datanode/TestDataXceiverLazyPersistHint.java   |    20 +-
 .../datanode/TestDatanodeProtocolRetryPolicy.java  |    20 +-
 .../TestDnRespectsBlockReportSplitThreshold.java   |    10 +-
 .../hdfs/server/datanode/TestFsDatasetCache.java   |     8 +-
 .../datanode/TestIncrementalBlockReports.java      |     4 +-
 .../hdfs/server/datanode/TestStorageReport.java    |     9 +-
 .../server/datanode/TestTriggerBlockReport.java    |    10 +-
 .../datanode/checker/TestDatasetVolumeChecker.java |    10 +-
 .../checker/TestDatasetVolumeCheckerFailures.java  |    20 +-
 .../checker/TestDatasetVolumeCheckerTimeout.java   |    23 +-
 .../checker/TestThrottledAsyncChecker.java         |     2 +-
 .../checker/TestThrottledAsyncCheckerTimeout.java  |    29 +-
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java |    17 +-
 .../hdfs/server/diskbalancer/TestDiskBalancer.java |    65 +-
 .../hdfs/server/namenode/FSImageTestUtil.java      |     4 +-
 .../server/namenode/NNThroughputBenchmark.java     |     5 +-
 .../hdfs/server/namenode/TestAddBlockRetry.java    |     4 +-
 .../hdfs/server/namenode/TestAddStripedBlocks.java |   116 +-
 .../hdfs/server/namenode/TestAuditLogAtDebug.java  |     8 +-
 .../hdfs/server/namenode/TestAuditLogger.java      |     2 +-
 .../namenode/TestAuditLoggerWithCommands.java      |     2 +-
 .../TestBlockPlacementPolicyRackFaultTolerant.java |     7 +-
 .../hdfs/server/namenode/TestCheckpoint.java       |    12 +-
 .../namenode/TestCommitBlockSynchronization.java   |     4 +-
 .../namenode/TestDefaultBlockPlacementPolicy.java  |     4 +-
 .../hdfs/server/namenode/TestDeleteRace.java       |     7 +-
 .../server/namenode/TestDiskspaceQuotaUpdate.java  |    10 +-
 .../namenode/TestEditLogFileInputStream.java       |     2 +-
 .../namenode/TestEditLogJournalFailures.java       |     6 +-
 .../hdfs/server/namenode/TestFSDirWriteFileOp.java |    14 +-
 .../server/namenode/TestFSPermissionChecker.java   |     2 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java      |     7 +-
 .../hdfs/server/namenode/TestGetImageServlet.java  |     9 +-
 .../namenode/TestNNStorageRetentionManager.java    |     4 +-
 .../server/namenode/TestNamenodeRetryCache.java    |     8 +-
 .../hdfs/server/namenode/TestSaveNamespace.java    |    20 +-
 .../hdfs/server/namenode/TestStorageRestore.java   |     4 +-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java |    12 +-
 .../namenode/ha/TestConsistentReadsObserver.java   |    47 +-
 .../hdfs/server/namenode/ha/TestDNFencing.java     |    13 +-
 .../server/namenode/ha/TestFailureToReadEdits.java |    10 +-
 .../hdfs/server/namenode/ha/TestObserverNode.java  |     6 +-
 .../server/namenode/ha/TestPipelinesFailover.java  |     7 +-
 .../server/namenode/ha/TestRetryCacheWithHA.java   |     2 +-
 .../server/namenode/ha/TestStandbyCheckpoints.java |    16 +-
 .../namenode/metrics/TestNameNodeMetrics.java      |     4 +-
 .../namenode/snapshot/TestRenameWithSnapshots.java |    38 +-
 .../namenode/snapshot/TestSnapshotManager.java     |     4 +-
 .../org/apache/hadoop/hdfs/tools/TestDFSAdmin.java |     2 +-
 .../hdfs/tools/TestDelegationTokenFetcher.java     |     7 +-
 .../org/apache/hadoop/hdfs/tools/TestECAdmin.java  |   178 +-
 .../TestStoragePolicySatisfyAdminCommands.java     |    22 +
 .../offlineImageViewer/TestOfflineImageViewer.java |    19 +
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java    |    37 +-
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  |     3 +-
 .../src/test/resources/testErasureCodingConf.xml   |    40 +
 .../jdiff/Apache_Hadoop_MapReduce_Core_3.1.2.xml   | 28085 +++++++++++++++
 .../Apache_Hadoop_MapReduce_JobClient_3.1.2.xml    |    16 +
 .../hadoop/mapred/TestLocalContainerLauncher.java  |     2 +-
 .../hadoop/mapred/TestTaskAttemptListenerImpl.java |    14 +-
 .../org/apache/hadoop/mapred/TestYarnChild.java    |     2 +-
 .../jobhistory/TestJobHistoryEventHandler.java     |     2 +-
 .../v2/app/TestKillAMPreemptionPolicy.java         |    14 +-
 .../hadoop/mapreduce/v2/app/TestMRAppMaster.java   |     2 +-
 .../mapreduce/v2/app/TestStagingCleanup.java       |     4 +-
 .../mapreduce/v2/app/TestTaskHeartbeatHandler.java |     2 +-
 .../v2/app/commit/TestCommitterEventHandler.java   |    10 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java     |     2 +-
 .../v2/app/launcher/TestContainerLauncher.java     |     9 +
 .../v2/app/launcher/TestContainerLauncherImpl.java |    11 +-
 .../v2/app/local/TestLocalContainerAllocator.java  |     2 +-
 .../mapreduce/v2/app/rm/TestRMCommunicator.java    |    15 +-
 .../v2/app/rm/TestRMContainerAllocator.java        |     8 +-
 .../mapred/TestLocalDistributedCacheManager.java   |     4 +-
 .../org/apache/hadoop/mapred/LineRecordReader.java |    12 +-
 .../main/java/org/apache/hadoop/mapreduce/Job.java |    10 +-
 .../org/apache/hadoop/mapreduce/MRJobConfig.java   |    14 +
 .../lib/input/FixedLengthRecordReader.java         |    14 +-
 .../mapreduce/lib/input/LineRecordReader.java      |    12 +-
 .../mapreduce/lib/input/NLineInputFormat.java      |    12 +-
 .../markdown/PluggableShuffleAndPluggableSort.md   |     5 +-
 .../java/org/apache/hadoop/mapred/TestMapTask.java |     2 +-
 .../java/org/apache/hadoop/mapred/TestTask.java    |     4 +-
 .../hadoop/mapreduce/TestJobMonitorAndPrint.java   |     6 +-
 .../hadoop/mapreduce/TestJobResourceUploader.java  |     4 +-
 .../TestJobResourceUploaderWithSharedCache.java    |     6 +-
 .../hadoop/mapreduce/lib/db/DriverForTest.java     |     2 +-
 .../hadoop/mapreduce/security/TestTokenCache.java  |     2 +-
 .../hadoop/mapreduce/task/reduce/TestFetcher.java  |     1 -
 .../hadoop/mapreduce/task/reduce/TestMerger.java   |     2 +-
 .../org/apache/hadoop/mapreduce/tools/TestCLI.java |     2 +-
 ...stHistoryServerFileSystemStateStoreService.java |    11 +-
 .../hadoop/mapreduce/v2/hs/TestJobHistory.java     |     4 +-
 .../hadoop/mapreduce/v2/hs/webapp/TestBlocks.java  |     2 +-
 .../mapreduce/v2/hs/webapp/TestHsJobBlock.java     |     2 +-
 .../test/java/org/apache/hadoop/fs/TestDFSIO.java  |     1 +
 .../apache/hadoop/mapred/JobClientUnitTest.java    |     2 +-
 .../hadoop/mapred/TestClientServiceDelegate.java   |     2 +-
 .../hadoop/mapred/TestMRCJCFileInputFormat.java    |     2 +-
 .../org/apache/hadoop/mapred/TestYARNRunner.java   |     2 +-
 .../mapreduce/TestYarnClientProtocolProvider.java  |     2 +-
 .../TestUmbilicalProtocolWithJobToken.java         |     7 +-
 .../hadoop-mapreduce-client-nativetask/pom.xml     |     2 +-
 .../nativetask/handlers/TestCombineHandler.java    |    11 +-
 .../handlers/TestNativeCollectorOnlyHandler.java   |     8 +-
 .../mapred/nativetask/serde/TestKVSerializer.java  |    28 +-
 .../apache/hadoop/mapred/TestShuffleHandler.java   |     2 +-
 .../hadoop-mapreduce-client/pom.xml                |     2 +-
 hadoop-mapreduce-project/pom.xml                   |     2 +-
 hadoop-ozone/Jenkinsfile                           |   116 +
 .../org/apache/hadoop/ozone/client/BucketArgs.java |    54 +-
 .../apache/hadoop/ozone/client/ObjectStore.java    |    57 +-
 .../apache/hadoop/ozone/client/OzoneBucket.java    |   117 +-
 .../hadoop/ozone/client/OzoneClientUtils.java      |     4 +
 .../hadoop/ozone/client/OzoneKeyDetails.java       |    19 +-
 .../client/OzoneMultipartUploadPartListParts.java  |   107 +
 .../apache/hadoop/ozone/client/OzoneVolume.java    |    25 +-
 .../org/apache/hadoop/ozone/client/VolumeArgs.java |    28 +-
 .../ozone/client/io/BlockOutputStreamEntry.java    |   341 +
 .../ozone/client/io/ChunkGroupInputStream.java     |   333 -
 .../hadoop/ozone/client/io/KeyInputStream.java     |   338 +
 .../hadoop/ozone/client/io/KeyOutputStream.java    |   329 +-
 .../hadoop/ozone/client/io/OzoneInputStream.java   |     6 +-
 .../ozone/client/protocol/ClientProtocol.java      |   103 +-
 .../hadoop/ozone/client/rest/RestClient.java       |    91 +-
 .../hadoop/ozone/client/rpc/OzoneKMSUtil.java      |   176 +
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |   263 +-
 .../hadoop/ozone/client/TestHddsClientUtils.java   |     2 +-
 hadoop-ozone/common/pom.xml                        |     4 +
 hadoop-ozone/common/src/main/bin/ozone             |    22 +-
 hadoop-ozone/common/src/main/bin/start-ozone.sh    |    16 +-
 hadoop-ozone/common/src/main/bin/stop-ozone.sh     |    13 +-
 .../apache/hadoop/hdds/protocol/package-info.java  |    20 +
 .../main/java/org/apache/hadoop/ozone/OmUtils.java |   282 +-
 .../ozone/OzoneIllegalArgumentException.java       |    40 +
 .../org/apache/hadoop/ozone/audit/OMAction.java    |     4 +-
 .../ozone/client/rest/response/BucketInfo.java     |    17 +
 .../ozone/client/rest/response/KeyInfoDetails.java |    11 +
 .../apache/hadoop/ozone/freon/OzoneGetConf.java    |     2 +
 .../org/apache/hadoop/ozone/om/OMConfigKeys.java   |    68 +-
 .../apache/hadoop/ozone/om/OMMetadataManager.java  |     7 +
 .../apache/hadoop/ozone/om/OzoneManagerLock.java   |    24 +-
 .../hadoop/ozone/om/codec/OmBucketInfoCodec.java   |     8 +-
 .../hadoop/ozone/om/codec/OmKeyInfoCodec.java      |     8 +-
 .../ozone/om/codec/OmMultipartKeyInfoCodec.java    |     9 +-
 .../hadoop/ozone/om/codec/OmVolumeArgsCodec.java   |     8 +-
 .../hadoop/ozone/om/codec/VolumeListCodec.java     |     8 +-
 .../hadoop/ozone/om/exceptions/OMException.java    |   189 +
 .../hadoop/ozone/om/exceptions/package-info.java   |     0
 .../ozone/om/helpers/BucketEncryptionKeyInfo.java  |    79 +
 .../ozone/om/helpers/EncryptionBucketInfo.java     |   114 +
 .../hadoop/ozone/om/helpers/KeyValueUtil.java      |    54 +
 .../hadoop/ozone/om/helpers/OmBucketArgs.java      |    24 +-
 .../hadoop/ozone/om/helpers/OmBucketInfo.java      |   131 +-
 .../apache/hadoop/ozone/om/helpers/OmKeyArgs.java  |    29 +-
 .../apache/hadoop/ozone/om/helpers/OmKeyInfo.java  |    93 +-
 .../hadoop/ozone/om/helpers/OmKeyLocationInfo.java |    44 +-
 .../ozone/om/helpers/OmMultipartKeyInfo.java       |     7 +-
 .../om/helpers/OmMultipartUploadCompleteInfo.java  |    70 +
 .../ozone/om/helpers/OmMultipartUploadList.java    |    63 +
 .../om/helpers/OmMultipartUploadListParts.java     |    84 +
 .../apache/hadoop/ozone/om/helpers/OmPartInfo.java |    60 +
 .../hadoop/ozone/om/helpers/OmVolumeArgs.java      |    68 +-
 .../hadoop/ozone/om/helpers/S3SecretValue.java     |    81 +
 .../hadoop/ozone/om/helpers/ServiceInfo.java       |    14 +-
 .../hadoop/ozone/om/helpers/WithMetadata.java      |    45 +
 .../ozone/om/protocol/OzoneManagerProtocol.java    |    63 +-
 .../om/protocol/OzoneManagerSecurityProtocol.java  |    67 +
 ...OzoneManagerProtocolClientSideTranslatorPB.java |   674 +-
 .../om/protocolPB/OzoneManagerProtocolPB.java      |     7 +
 .../apache/hadoop/ozone/protocolPB/OMPBHelper.java |   167 +
 .../security/OzoneBlockTokenSecretManager.java     |   194 +
 .../OzoneDelegationTokenSecretManager.java         |   470 +
 .../security/OzoneDelegationTokenSelector.java     |    51 +
 .../hadoop/ozone/security/OzoneSecretKey.java      |   176 +
 .../hadoop/ozone/security/OzoneSecretManager.java  |   280 +
 .../hadoop/ozone/security/OzoneSecretStore.java    |   249 +
 .../ozone/security/OzoneSecurityException.java     |   104 +
 .../ozone/security/OzoneTokenIdentifier.java       |   217 +
 .../apache/hadoop/ozone/security/package-info.java |    21 +
 .../apache/hadoop/ozone/util/OzoneVersionInfo.java |   174 +-
 .../org/apache/hadoop/ozone/util/package-info.java |    22 +
 .../hadoop/ozone/web/handlers/VolumeArgs.java      |     1 +
 .../hadoop/ozone/web/response/BucketInfo.java      |    10 +
 .../src/main/proto/OzoneManagerProtocol.proto      |   253 +-
 .../java/org/apache/hadoop/ozone/TestOmUtils.java  |    44 +
 .../om/codec/TestOmMultipartKeyInfoCodec.java      |    18 +-
 .../ozone/om/exceptions/TestResultCodes.java       |    49 +
 .../hadoop/ozone/om/helpers/TestOmBucketInfo.java  |    46 +
 .../hadoop/ozone/om/helpers/TestOmKeyInfo.java     |    52 +
 .../hadoop/ozone/om/helpers/package-info.java      |    21 +
 .../security/TestOzoneBlockTokenSecretManager.java |   147 +
 .../TestOzoneDelegationTokenSecretManager.java     |   217 +
 .../ozone/security/acl/TestOzoneObjInfo.java       |    11 +-
 hadoop-ozone/datanode/pom.xml                      |    22 +
 hadoop-ozone/dev-support/checks/acceptance.sh      |    18 +
 hadoop-ozone/dev-support/checks/author.sh          |    22 +
 hadoop-ozone/dev-support/checks/build.sh           |    18 +
 hadoop-ozone/dev-support/checks/checkstyle.sh      |    23 +
 hadoop-ozone/dev-support/checks/findbugs.sh        |    34 +
 hadoop-ozone/dev-support/checks/isolation.sh       |    24 +
 hadoop-ozone/dev-support/checks/rat.sh             |    24 +
 hadoop-ozone/dev-support/checks/unit.sh            |    24 +
 hadoop-ozone/dev-support/docker/Dockerfile         |    66 +
 .../dist/dev-support/bin/dist-layout-stitching     |     6 +-
 hadoop-ozone/dist/pom.xml                          |    33 +-
 hadoop-ozone/dist/src/main/blockade/README.md      |    20 +-
 .../src/main/blockade/blockadeUtils/blockade.py    |    44 +-
 .../main/blockade/clusterUtils/cluster_utils.py    |   261 +-
 hadoop-ozone/dist/src/main/blockade/conftest.py    |    53 +-
 .../dist/src/main/blockade/test_blockade.py        |    54 -
 .../main/blockade/test_blockade_client_failure.py  |   124 +
 .../blockade/test_blockade_datanode_isolation.py   |   111 +
 .../dist/src/main/blockade/test_blockade_flaky.py  |    61 +
 .../main/blockade/test_blockade_mixed_failure.py   |   117 +
 ...t_blockade_mixed_failure_three_nodes_isolate.py |   144 +
 .../test_blockade_mixed_failure_two_nodes.py       |   121 +
 .../main/blockade/test_blockade_scm_isolation.py   |   111 +
 .../main/compose/ozone-hdfs/docker-compose.yaml    |     1 +
 .../src/main/compose/ozone/docker-compose.yaml     |     4 +
 .../dist/src/main/compose/ozone/docker-config      |     2 +
 .../main/compose/ozoneblockade/docker-compose.yaml |    58 +
 .../src/main/compose/ozoneblockade/docker-config   |    77 +
 .../src/main/compose/ozonefs/docker-compose.yaml   |    30 +-
 .../src/main/compose/ozoneperf/docker-compose.yaml |     1 +
 .../src/main/compose/ozones3/docker-compose.yaml   |     1 +
 .../dist/src/main/compose/ozonesecure/.env         |    18 +
 .../dist/src/main/compose/ozonesecure/README.md    |    22 +
 .../main/compose/ozonesecure/docker-compose.yaml   |    84 +
 .../src/main/compose/ozonesecure/docker-config     |   106 +
 .../docker-image/docker-krb5/Dockerfile-krb5       |    34 +
 .../ozonesecure/docker-image/docker-krb5/README.md |    34 +
 .../ozonesecure/docker-image/docker-krb5/kadm5.acl |    20 +
 .../ozonesecure/docker-image/docker-krb5/krb5.conf |    40 +
 .../docker-image/docker-krb5/launcher.sh           |    25 +
 .../ozonesecure/docker-image/runner/Dockerfile     |    39 +
 .../ozonesecure/docker-image/runner/build.sh       |    26 +
 .../docker-image/runner/scripts/envtoconf.py       |   115 +
 .../docker-image/runner/scripts/krb5.conf          |    38 +
 .../docker-image/runner/scripts/starter.sh         |   100 +
 .../docker-image/runner/scripts/transformation.py  |   150 +
 .../main/compose/ozonetrace/docker-compose.yaml    |    65 +
 .../dist/src/main/compose/ozonetrace/docker-config |    84 +
 .../main/smoketest/auditparser/auditparser.robot   |    40 +
 .../src/main/smoketest/basic/ozone-shell.robot     |     5 +-
 .../dist/src/main/smoketest/ozonefs/ozonefs.robot  |     2 +-
 .../src/main/smoketest/s3/MultipartUpload.robot    |   207 +
 .../dist/src/main/smoketest/s3/awss3.robot         |     4 +-
 .../dist/src/main/smoketest/s3/commonawslib.robot  |     5 +
 .../dist/src/main/smoketest/s3/objectcopy.robot    |    12 +-
 .../dist/src/main/smoketest/s3/objectdelete.robot  |     6 +-
 .../src/main/smoketest/s3/objectmultidelete.robot  |    14 +-
 .../dist/src/main/smoketest/s3/objectputget.robot  |     4 +-
 .../dist/src/main/smoketest/s3/webui.robot         |    34 +
 .../src/main/smoketest/security/ozone-secure.robot |   111 +
 hadoop-ozone/dist/src/main/smoketest/test.sh       |    56 +-
 hadoop-ozone/integration-test/pom.xml              |    18 +-
 .../TestContainerStateManagerIntegration.java      |   104 +-
 .../hdds/scm/pipeline/TestNode2PipelineMap.java    |    14 +-
 .../hdds/scm/pipeline/TestPipelineClose.java       |     4 +-
 .../scm/pipeline/TestPipelineStateManager.java     |    42 +
 .../scm/pipeline/TestRatisPipelineProvider.java    |    43 +-
 .../hdds/scm/pipeline/TestRatisPipelineUtils.java  |   101 +
 .../hadoop/hdds/scm/pipeline/TestSCMRestart.java   |     2 +-
 .../org/apache/hadoop/ozone/MiniOzoneCluster.java  |    46 +-
 .../apache/hadoop/ozone/MiniOzoneClusterImpl.java  |    73 +-
 .../hadoop/ozone/MiniOzoneHAClusterImpl.java       |   261 +
 .../org/apache/hadoop/ozone/OzoneTestUtils.java    |    23 +-
 .../org/apache/hadoop/ozone/RatisTestHelper.java   |     7 +-
 .../TestContainerStateMachineIdempotency.java      |    11 +-
 .../hadoop/ozone/TestOzoneConfigurationFields.java |    11 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java       |   646 +
 .../hadoop/ozone/TestStorageContainerManager.java  |    29 +-
 .../ozone/client/CertificateClientTestImpl.java    |   151 +
 .../apache/hadoop/ozone/client/package-info.java   |    20 +
 .../ozone/client/rest/TestOzoneRestClient.java     |   498 -
 .../hadoop/ozone/client/rest/package-info.java     |    23 -
 .../apache/hadoop/ozone/client/rpc/TestBCSID.java  |     3 +-
 .../client/rpc/TestContainerStateMachine.java      |   152 +
 .../rpc/TestContainerStateMachineFailures.java     |     3 +-
 .../client/rpc/TestHybridPipelineOnDatanode.java   |   166 +
 .../client/rpc/TestOzoneAtRestEncryption.java      |   238 +
 .../ozone/client/rpc/TestOzoneRpcClient.java       |  1487 +-
 .../client/rpc/TestOzoneRpcClientAbstract.java     |  2103 ++
 .../client/rpc/TestOzoneRpcClientWithRatis.java    |    58 +
 .../hadoop/ozone/client/rpc/TestReadRetries.java   |   223 +
 .../ozone/client/rpc/TestSecureOzoneRpcClient.java |   239 +
 .../ozone/container/ContainerTestHelper.java       |    90 +-
 .../container/common/TestBlockDeletingService.java |     2 +-
 .../impl/TestContainerDeletionChoosingPolicy.java  |     5 +-
 .../common/impl/TestContainerPersistence.java      |    18 +-
 .../commandhandler/TestBlockDeletion.java          |     7 +-
 .../TestCloseContainerByPipeline.java              |    21 +-
 .../commandhandler/TestCloseContainerHandler.java  |     3 +-
 .../commandhandler/TestDeleteContainerHandler.java |   277 +
 .../statemachine/commandhandler/package-info.java  |    21 +
 .../container/ozoneimpl/TestOzoneContainer.java    |    45 +-
 .../ozoneimpl/TestOzoneContainerWithTLS.java       |   190 +
 .../ozoneimpl/TestSecureOzoneContainer.java        |   227 +
 .../container/server/TestContainerServer.java      |    46 +-
 .../server/TestSecureContainerServer.java          |   239 +
 .../ozone/om/TestContainerReportWithKeys.java      |     3 +-
 .../ozone/om/TestMultipleContainerReadWrite.java   |     2 +-
 .../org/apache/hadoop/ozone/om/TestOmAcls.java     |    27 +-
 .../apache/hadoop/ozone/om/TestOzoneManager.java   |   249 +-
 .../ozone/om/TestOzoneManagerConfiguration.java    |   343 +
 .../apache/hadoop/ozone/om/TestOzoneManagerHA.java |   156 +
 .../apache/hadoop/ozone/om/TestScmChillMode.java   |    22 +-
 .../ozone/ozShell/TestOzoneDatanodeShell.java      |    36 +-
 .../hadoop/ozone/ozShell/TestOzoneShell.java       |   301 +-
 .../hadoop/ozone/scm/TestAllocateContainer.java    |     9 +-
 .../hadoop/ozone/scm/TestContainerSmallFile.java   |    16 +-
 .../scm/TestGetCommittedBlockLengthAndPutKey.java  |    14 +-
 .../hadoop/ozone/scm/TestXceiverClientManager.java |    71 +-
 .../hadoop/ozone/scm/TestXceiverClientMetrics.java |     6 +
 .../ozone/web/TestOzoneRestWithMiniCluster.java    |     7 +-
 .../hadoop/ozone/web/client/TestBuckets.java       |    43 +-
 .../hadoop/ozone/web/client/TestBucketsRatis.java  |     1 +
 .../apache/hadoop/ozone/web/client/TestKeys.java   |    91 +-
 .../apache/hadoop/ozone/web/client/TestVolume.java |   131 +-
 .../hadoop/ozone/web/client/TestVolumeRatis.java   |     3 +-
 .../integration-test/src/test/resources/ssl/ca.crt |    27 +
 .../integration-test/src/test/resources/ssl/ca.key |    54 +
 .../src/test/resources/ssl/client.crt              |    27 +
 .../src/test/resources/ssl/client.csr              |    26 +
 .../src/test/resources/ssl/client.key              |    51 +
 .../src/test/resources/ssl/client.pem              |    52 +
 .../src/test/resources/ssl/generate.sh             |    34 +
 .../src/test/resources/ssl/server.crt              |    27 +
 .../src/test/resources/ssl/server.csr              |    26 +
 .../src/test/resources/ssl/server.key              |    51 +
 .../src/test/resources/ssl/server.pem              |    52 +
 hadoop-ozone/objectstore-service/pom.xml           |     6 +
 .../hadoop/ozone/web/OzoneHddsDatanodeService.java |     4 +-
 .../apache/hadoop/ozone/web/interfaces/Bucket.java |     1 +
 .../apache/hadoop/ozone/web/interfaces/Volume.java |     1 +
 .../web/storage/DistributedStorageHandler.java     |    26 +-
 hadoop-ozone/ozone-manager/pom.xml                 |     1 -
 .../org/apache/hadoop/ozone/om/BucketManager.java  |     2 +-
 .../apache/hadoop/ozone/om/BucketManagerImpl.java  |    69 +-
 .../org/apache/hadoop/ozone/om/KeyManager.java     |    44 +-
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |   550 +-
 .../hadoop/ozone/om/OMDbSnapshotServlet.java       |   142 +
 .../java/org/apache/hadoop/ozone/om/OMMetrics.java |    41 +
 .../org/apache/hadoop/ozone/om/OMNodeDetails.java  |   111 +
 .../java/org/apache/hadoop/ozone/om/OMStorage.java |     3 +-
 .../hadoop/ozone/om/OmMetadataManagerImpl.java     |    41 +-
 .../org/apache/hadoop/ozone/om/OzoneManager.java   |   916 +-
 .../hadoop/ozone/om/OzoneManagerHttpServer.java    |     6 +-
 .../hadoop/ozone/om/S3BucketManagerImpl.java       |     5 +-
 .../apache/hadoop/ozone/om/S3SecretManager.java    |    30 +
 .../hadoop/ozone/om/S3SecretManagerImpl.java       |    82 +
 .../org/apache/hadoop/ozone/om/VolumeManager.java  |     2 +-
 .../apache/hadoop/ozone/om/VolumeManagerImpl.java  |    22 +-
 .../hadoop/ozone/om/exceptions/OMException.java    |   123 -
 .../hadoop/ozone/om/ratis/OMRatisHelper.java       |   117 +
 .../ozone/om/ratis/OzoneManagerRatisClient.java    |   181 +
 .../ozone/om/ratis/OzoneManagerRatisServer.java    |   192 +-
 .../ozone/om/ratis/OzoneManagerStateMachine.java   |   157 +
 ...OzoneManagerProtocolServerSideTranslatorPB.java |   805 +-
 .../protocolPB/OzoneManagerRequestHandler.java     |   879 +
 .../org/apache/hadoop/ozone/web/ozShell/Shell.java |    35 +-
 .../web/ozShell/bucket/CreateBucketHandler.java    |    32 +-
 .../hadoop/ozone/web/ozShell/keys/KeyCommands.java |     1 +
 .../ozone/web/ozShell/keys/PutKeyHandler.java      |     3 +-
 .../ozone/web/ozShell/keys/RenameKeyHandler.java   |    73 +
 .../ozone/web/ozShell/s3/GetS3SecretHandler.java   |    49 +
 .../hadoop/ozone/web/ozShell/s3/S3Commands.java    |    60 +
 .../hadoop/ozone/web/ozShell/s3/package-info.java  |    21 +
 .../web/ozShell/token/CancelTokenHandler.java      |    72 +
 .../ozone/web/ozShell/token/GetTokenHandler.java   |    77 +
 .../ozone/web/ozShell/token/PrintTokenHandler.java |    71 +
 .../ozone/web/ozShell/token/RenewTokenHandler.java |    75 +
 .../ozone/web/ozShell/token/TokenCommands.java     |    64 +
 .../ozone/web/ozShell/token/package-info.java      |    26 +
 .../web/ozShell/volume/CreateVolumeHandler.java    |     2 +-
 .../web/ozShell/volume/ListVolumeHandler.java      |     3 +-
 .../hadoop/ozone/om/TestBucketManagerImpl.java     |    52 +-
 .../apache/hadoop/ozone/om/TestChunkStreams.java   |    18 +-
 .../hadoop/ozone/om/TestKeyDeletingService.java    |     6 +-
 .../apache/hadoop/ozone/om/TestKeyManagerImpl.java |     2 +-
 .../om/ratis/TestOzoneManagerRatisServer.java      |   145 +-
 .../ozone/security/TestOzoneManagerBlockToken.java |   251 +
 .../ozone/security/TestOzoneTokenIdentifier.java   |   300 +
 .../apache/hadoop/ozone/security/package-info.java |    21 +
 hadoop-ozone/ozonefs-lib-legacy/pom.xml            |   111 +
 .../src/main/resources/ozonefs.txt                 |    21 +
 hadoop-ozone/ozonefs-lib/pom.xml                   |    96 +
 hadoop-ozone/ozonefs/pom.xml                       |    70 +-
 .../org/apache/hadoop/fs/ozone/BasicKeyInfo.java   |    53 +
 .../hadoop/fs/ozone/FilteredClassLoader.java       |    86 +
 .../apache/hadoop/fs/ozone/OzoneClientAdapter.java |    55 +
 .../hadoop/fs/ozone/OzoneClientAdapterFactory.java |   122 +
 .../hadoop/fs/ozone/OzoneClientAdapterImpl.java    |   246 +
 .../apache/hadoop/fs/ozone/OzoneFSInputStream.java |    16 +-
 .../hadoop/fs/ozone/OzoneFSOutputStream.java       |     5 +-
 .../hadoop/fs/ozone/OzoneFSStorageStatistics.java  |   126 +
 .../apache/hadoop/fs/ozone/OzoneFileSystem.java    |   262 +-
 .../java/org/apache/hadoop/fs/ozone/Statistic.java |   119 +
 .../hadoop/fs/ozone/TestOzoneFileInterfaces.java   |    33 +-
 .../hadoop/fs/ozone/TestOzoneFileSystem.java       |    66 +
 hadoop-ozone/pom.xml                               |    60 +-
 hadoop-ozone/s3gateway/pom.xml                     |    65 +-
 .../apache/hadoop/ozone/s3/HeaderPreprocessor.java |     8 +
 .../hadoop/ozone/s3/SignedChunksInputStream.java   |     2 +-
 .../hadoop/ozone/s3/endpoint/BucketEndpoint.java   |    39 +-
 .../endpoint/CompleteMultipartUploadRequest.java   |    77 +
 .../endpoint/CompleteMultipartUploadResponse.java  |    78 +
 .../hadoop/ozone/s3/endpoint/EndpointBase.java     |    30 +-
 .../ozone/s3/endpoint/ListPartsResponse.java       |   196 +
 .../endpoint/MultipartUploadInitiateResponse.java  |    69 +
 .../hadoop/ozone/s3/endpoint/ObjectEndpoint.java   |   340 +-
 .../hadoop/ozone/s3/endpoint/RootEndpoint.java     |    17 +-
 .../hadoop/ozone/s3/exception/S3ErrorTable.java    |    21 +
 .../s3/header/AuthenticationHeaderParser.java      |     4 +
 .../hadoop/ozone/s3/io/S3WrapperInputStream.java   |    10 +-
 .../apache/hadoop/ozone/s3/util/ContinueToken.java |   173 +
 .../ozone/s3/util/RangeHeaderParserUtil.java       |    95 +
 .../org/apache/hadoop/ozone/s3/util/S3utils.java   |   159 -
 .../resources/webapps/s3gateway/WEB-INF/web.xml    |     4 -
 .../src/main/resources/webapps/static/index.html   |    79 +
 .../hadoop/ozone/client/ObjectStoreStub.java       |    28 +-
 .../hadoop/ozone/client/OzoneBucketStub.java       |   170 +-
 .../hadoop/ozone/client/OzoneOutputStreamStub.java |    73 +
 .../s3/endpoint/TestAbortMultipartUpload.java      |    83 +
 .../hadoop/ozone/s3/endpoint/TestBucketGet.java    |    49 +-
 .../s3/endpoint/TestInitiateMultipartUpload.java   |    79 +
 .../hadoop/ozone/s3/endpoint/TestListParts.java    |   129 +
 .../s3/endpoint/TestMultipartUploadComplete.java   |   222 +
 .../hadoop/ozone/s3/endpoint/TestObjectDelete.java |     2 +-
 .../hadoop/ozone/s3/endpoint/TestObjectGet.java    |     2 +-
 .../hadoop/ozone/s3/endpoint/TestObjectHead.java   |     3 +-
 .../ozone/s3/endpoint/TestObjectMultiDelete.java   |     1 -
 .../hadoop/ozone/s3/endpoint/TestObjectPut.java    |    26 +-
 .../hadoop/ozone/s3/endpoint/TestPartUpload.java   |   126 +
 .../hadoop/ozone/s3/endpoint/TestRootList.java     |    13 +-
 .../hadoop/ozone/s3/util/TestContinueToken.java    |    50 +
 .../ozone/s3/util/TestRangeHeaderParserUtil.java   |    93 +
 .../apache/hadoop/ozone/s3/util/TestS3utils.java   |    93 -
 hadoop-ozone/tools/pom.xml                         |    16 +-
 .../hadoop/ozone/audit/parser/AuditParser.java     |    55 +
 .../ozone/audit/parser/common/DatabaseHelper.java  |   245 +
 .../ozone/audit/parser/common/ParserConsts.java    |    35 +
 .../ozone/audit/parser/common/package-info.java    |    20 +
 .../audit/parser/handler/LoadCommandHandler.java   |    52 +
 .../audit/parser/handler/QueryCommandHandler.java  |    57 +
 .../parser/handler/TemplateCommandHandler.java     |    61 +
 .../ozone/audit/parser/handler/package-info.java   |    20 +
 .../ozone/audit/parser/model/AuditEntry.java       |   188 +
 .../ozone/audit/parser/model/package-info.java     |    20 +
 .../hadoop/ozone/audit/parser/package-info.java    |    20 +
 .../java/org/apache/hadoop/ozone/freon/Freon.java  |     7 +
 .../hadoop/ozone/freon/RandomKeyGenerator.java     |    62 +-
 .../apache/hadoop/ozone/fsck/BlockIdDetails.java   |    83 +
 .../apache/hadoop/ozone/fsck/ContainerMapper.java  |   134 +
 .../org/apache/hadoop/ozone/fsck/package-info.java |    44 +
 .../ozone/genesis/BenchMarkBlockManager.java       |   167 +
 .../ozone/genesis/BenchMarkMetadataStoreReads.java |     3 +
 .../genesis/BenchMarkMetadataStoreWrites.java      |     4 +-
 .../ozone/genesis/BenchMarkOMKeyAllocation.java    |   135 +
 .../ozone/genesis/BenchMarkRocksDbStore.java       |     6 +-
 .../org/apache/hadoop/ozone/genesis/Genesis.java   |     2 +
 .../apache/hadoop/ozone/genesis/GenesisUtil.java   |     8 +-
 .../tools/src/main/resources/commands.properties   |    22 +
 .../hadoop/ozone/audit/parser/TestAuditParser.java |   191 +
 .../hadoop/ozone/audit/parser/package-info.java    |    21 +
 .../hadoop/ozone/fsck/TestContainerMapper.java     |   117 +
 .../org/apache/hadoop/ozone/fsck/package-info.java |    44 +
 .../hadoop/ozone/scm/TestContainerSQLCli.java      |     8 +-
 .../org/apache/hadoop/ozone/scm/package-info.java  |    22 +
 .../tools/src/test/resources/commands.properties   |    22 +
 .../tools/src/test/resources/testaudit.log         |    15 +
 hadoop-project-dist/pom.xml                        |     2 +-
 hadoop-project/pom.xml                             |    66 +-
 hadoop-project/src/site/site.xml                   |     1 +
 .../hadoop-submarine-core}/README.md               |     0
 hadoop-submarine/hadoop-submarine-core/pom.xml     |   144 +
 .../base/ubuntu-16.04/Dockerfile.cpu.tf_1.8.0      |     0
 .../base/ubuntu-16.04/Dockerfile.gpu.tf_1.8.0      |     0
 .../src/main/docker/build-all.sh                   |     0
 .../ubuntu-16.04/Dockerfile.cpu.tf_1.8.0           |     0
 .../ubuntu-16.04/Dockerfile.gpu.tf_1.8.0           |     0
 .../cifar10_estimator_tf_1.8.0/README.md           |     0
 .../cifar10_estimator_tf_1.8.0/cifar10.py          |     0
 .../cifar10_estimator_tf_1.8.0/cifar10_main.py     |     0
 .../cifar10_estimator_tf_1.8.0/cifar10_model.py    |     0
 .../cifar10_estimator_tf_1.8.0/cifar10_utils.py    |     0
 .../generate_cifar10_tfrecords.py                  |     0
 .../cifar10_estimator_tf_1.8.0/model_base.py       |     0
 .../zeppelin-notebook-example/Dockerfile.gpu       |     0
 .../zeppelin-notebook-example/run_container.sh     |     0
 .../docker/zeppelin-notebook-example/shiro.ini     |     0
 .../zeppelin-notebook-example/zeppelin-site.xml    |     0
 .../yarn/submarine/client/cli/AbstractCli.java     |     0
 .../hadoop/yarn/submarine/client/cli/Cli.java      |     0
 .../yarn/submarine/client/cli/CliConstants.java    |     0
 .../hadoop/yarn/submarine/client/cli/CliUtils.java |   124 +
 .../yarn/submarine/client/cli/RunJobCli.java       |     0
 .../yarn/submarine/client/cli/ShowJobCli.java      |     0
 .../submarine/client/cli/param/BaseParameters.java |     0
 .../submarine/client/cli/param/Localization.java   |     0
 .../yarn/submarine/client/cli/param/Quicklink.java |     0
 .../client/cli/param/RunJobParameters.java         |   334 +
 .../submarine/client/cli/param/RunParameters.java  |     0
 .../client/cli/param/ShowJobParameters.java        |     0
 .../submarine/client/cli/param/package-info.java   |     0
 .../yarn/submarine/common/ClientContext.java       |     0
 .../apache/hadoop/yarn/submarine/common/Envs.java  |     0
 .../submarine/common/api/JobComponentStatus.java   |    69 +
 .../hadoop/yarn/submarine/common/api/JobState.java |     0
 .../yarn/submarine/common/api/JobStatus.java       |    87 +
 .../hadoop/yarn/submarine/common/api/TaskType.java |     0
 .../common/conf/SubmarineConfiguration.java        |     0
 .../yarn/submarine/common/conf/SubmarineLogs.java  |     0
 .../common/exception/SubmarineException.java       |     0
 .../exception/SubmarineRuntimeException.java       |     0
 .../common/fs/DefaultRemoteDirectoryManager.java   |     0
 .../common/fs/RemoteDirectoryManager.java          |     0
 .../yarn/submarine/runtimes/RuntimeFactory.java    |   103 +
 .../common/FSBasedSubmarineStorageImpl.java        |     0
 .../yarn/submarine/runtimes/common/JobMonitor.java |     0
 .../submarine/runtimes/common/JobSubmitter.java    |     0
 .../runtimes/common/StorageKeyConstants.java       |     0
 .../runtimes/common/SubmarineStorage.java          |     0
 .../src/site/markdown/DeveloperGuide.md            |     0
 .../src/site/markdown/Examples.md                  |     0
 .../src/site/markdown/HowToInstall.md              |     0
 .../src/site/markdown/Index.md                     |     0
 .../src/site/markdown/InstallationGuide.md         |     0
 .../markdown/InstallationGuideChineseVersion.md    |     0
 .../src/site/markdown/QuickStart.md                |   218 +
 .../markdown/RunningDistributedCifar10TFJobs.md    |     0
 .../src/site/markdown/RunningZeppelinOnYARN.md     |     0
 .../src/site/markdown/TestAndTroubleshooting.md    |     0
 .../src/site/markdown/WriteDockerfile.md           |     0
 .../src/site/resources/css/site.css                |     0
 .../src/site/resources/images/job-logs-ui.png      |   Bin
 .../resources/images/multiple-tensorboard-jobs.png |   Bin
 .../site/resources/images/submarine-installer.gif  |   Bin
 .../site/resources/images/tensorboard-service.png  |   Bin
 .../hadoop-submarine-core}/src/site/site.xml       |     0
 .../submarine/client/cli/TestRunJobCliParsing.java |   201 +
 .../client/cli/TestShowJobCliParsing.java          |     0
 .../yarn/submarine/common/MockClientContext.java   |     0
 .../common/fs/MockRemoteDirectoryManager.java      |     0
 .../runtimes/common/MemorySubmarineStorage.java    |     0
 .../common/TestFSBasedSubmarineStorage.java        |     0
 .../src/test/resources/core-site.xml               |     0
 .../src/test/resources/hdfs-site.xml               |     0
 .../README.md                                      |     0
 .../hadoop-submarine-yarnservice-runtime/pom.xml   |   155 +
 .../yarnservice/YarnServiceJobMonitor.java         |    58 +
 .../yarnservice/YarnServiceJobSubmitter.java       |   908 +
 .../yarnservice/YarnServiceRuntimeFactory.java     |     0
 .../runtimes/yarnservice/YarnServiceUtils.java     |     0
 .../builder/JobComponentStatusBuilder.java         |    44 +
 .../yarnservice/builder/JobStatusBuilder.java      |    63 +
 .../cli/yarnservice/TestYarnServiceRunJobCli.java  |  1220 +
 .../cli/yarnservice/YarnServiceCliTestUtils.java   |     0
 .../yarnservice/TestTFConfigGenerator.java         |     0
 .../src/test/resources/core-site.xml               |     0
 .../src/test/resources/hdfs-site.xml               |     0
 hadoop-submarine/pom.xml                           |    59 +
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java       |     7 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java  |    10 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java    |    31 +-
 .../aliyun/oss/AssumedRoleCredentialProvider.java  |   115 -
 .../org/apache/hadoop/fs/aliyun/oss/Constants.java |    22 -
 .../src/site/markdown/tools/hadoop-aliyun/index.md |    58 +-
 .../fs/aliyun/oss/TestAliyunCredentials.java       |    30 +-
 .../aliyun/oss/TestAliyunOSSBlockOutputStream.java |     1 -
 .../oss/TestAliyunOSSFileSystemContract.java       |    14 +-
 .../aliyun/oss/TestAliyunOSSFileSystemStore.java   |    10 +-
 .../oss/contract/TestAliyunOSSContractDistCp.java  |     1 -
 hadoop-tools/hadoop-archive-logs/pom.xml           |     2 +-
 hadoop-tools/hadoop-archives/pom.xml               |     2 +-
 .../hadoop-aws/dev-support/findbugs-exclude.xml    |     5 +
 hadoop-tools/hadoop-aws/pom.xml                    |     7 +-
 .../hadoop/fs/s3a/AWSCredentialProviderList.java   |    80 +-
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |    51 +-
 .../hadoop/fs/s3a/DefaultS3ClientFactory.java      |    11 +-
 .../hadoop/fs/s3a/InconsistentAmazonS3Client.java  |     3 +-
 .../apache/hadoop/fs/s3a/InternalConstants.java    |    53 +
 .../java/org/apache/hadoop/fs/s3a/Invoker.java     |     2 +-
 .../main/java/org/apache/hadoop/fs/s3a/S3A.java    |    12 +-
 .../apache/hadoop/fs/s3a/S3AEncryptionMethods.java |    43 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java    |   606 +-
 .../org/apache/hadoop/fs/s3a/S3AInputStream.java   |    43 +-
 .../apache/hadoop/fs/s3a/S3AInstrumentation.java   |    31 +-
 .../org/apache/hadoop/fs/s3a/S3AOpContext.java     |    27 +-
 .../org/apache/hadoop/fs/s3a/S3AReadOpContext.java |    97 +-
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java    |   153 +-
 .../org/apache/hadoop/fs/s3a/S3ClientFactory.java  |     6 +-
 .../apache/hadoop/fs/s3a/S3ObjectAttributes.java   |    27 +-
 .../fs/s3a/SimpleAWSCredentialsProvider.java       |    33 +-
 .../java/org/apache/hadoop/fs/s3a/Statistic.java   |     9 +-
 .../fs/s3a/TemporaryAWSCredentialsProvider.java    |    91 +-
 .../apache/hadoop/fs/s3a/WriteOperationHelper.java |   105 +-
 .../fs/s3a/auth/AbstractAWSCredentialProvider.java |    70 +
 .../auth/AbstractSessionCredentialsProvider.java   |   171 +
 .../fs/s3a/auth/AssumedRoleCredentialProvider.java |    41 +-
 .../s3a/auth/IAMInstanceCredentialsProvider.java   |    75 +
 .../fs/s3a/auth/MarshalledCredentialBinding.java   |   208 +
 .../fs/s3a/auth/MarshalledCredentialProvider.java  |    92 +
 .../hadoop/fs/s3a/auth/MarshalledCredentials.java  |   409 +
 .../hadoop/fs/s3a/auth/NoAuthWithAWSException.java |     8 +-
 .../fs/s3a/auth/NoAwsCredentialsException.java     |    69 +
 .../org/apache/hadoop/fs/s3a/auth/RoleModel.java   |    97 +-
 .../apache/hadoop/fs/s3a/auth/RolePolicies.java    |   190 +-
 .../hadoop/fs/s3a/auth/STSClientFactory.java       |   173 +-
 .../fs/s3a/auth/delegation/AWSPolicyProvider.java  |    59 +
 .../fs/s3a/auth/delegation/AbstractDTService.java  |   154 +
 .../delegation/AbstractDelegationTokenBinding.java |   307 +
 .../delegation/AbstractS3ATokenIdentifier.java     |   307 +
 .../s3a/auth/delegation/DelegationConstants.java   |   165 +
 .../delegation/DelegationTokenIOException.java     |    50 +
 .../delegation/EncryptionSecretOperations.java     |    73 +
 .../fs/s3a/auth/delegation/EncryptionSecrets.java  |   221 +
 .../delegation/FullCredentialsTokenBinding.java    |   172 +
 .../delegation/FullCredentialsTokenIdentifier.java |    50 +
 .../fs/s3a/auth/delegation/RoleTokenBinding.java   |   176 +
 .../s3a/auth/delegation/RoleTokenIdentifier.java   |    49 +
 .../s3a/auth/delegation/S3ADelegationTokens.java   |   686 +
 .../fs/s3a/auth/delegation/S3ADtFetcher.java       |    80 +
 .../s3a/auth/delegation/SessionTokenBinding.java   |   424 +
 .../auth/delegation/SessionTokenIdentifier.java    |   148 +
 .../fs/s3a/auth/delegation/package-info.java       |    34 +
 .../apache/hadoop/fs/s3a/auth/package-info.java    |     6 +-
 .../apache/hadoop/fs/s3a/commit/DurationInfo.java  |    39 +-
 .../hadoop/fs/s3a/s3guard/DirListingMetadata.java  |     6 +-
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java      |    37 +-
 .../org/apache/hadoop/fs/s3a/s3guard/S3Guard.java  |     2 +
 .../apache/hadoop/fs/s3a/s3guard/S3GuardTool.java  |   111 +-
 .../fs/s3a/select/InternalSelectConstants.java     |    77 +
 .../apache/hadoop/fs/s3a/select/SelectBinding.java |   431 +
 .../hadoop/fs/s3a/select/SelectConstants.java      |   296 +
 .../hadoop/fs/s3a/select/SelectInputStream.java    |   457 +
 .../apache/hadoop/fs/s3a/select/SelectTool.java    |   355 +
 .../apache/hadoop/fs/s3a/select/package-info.java  |    27 +
 .../apache/hadoop/fs/s3native/S3xLoginHelper.java  |     2 -
 .../org.apache.hadoop.security.token.DtFetcher     |    18 +
 ...rg.apache.hadoop.security.token.TokenIdentifier |    20 +
 .../markdown/tools/hadoop-aws/assumed_roles.md     |   289 +-
 .../hadoop-aws/delegation_token_architecture.md    |   466 +
 .../markdown/tools/hadoop-aws/delegation_tokens.md |   870 +
 .../src/site/markdown/tools/hadoop-aws/index.md    |    87 +-
 .../site/markdown/tools/hadoop-aws/s3_select.md    |  1100 +
 .../src/site/markdown/tools/hadoop-aws/s3guard.md  |    14 +
 .../src/site/markdown/tools/hadoop-aws/testing.md  |    85 +-
 .../tools/hadoop-aws/troubleshooting_s3a.md        |    18 +-
 .../fs/contract/s3a/ITestS3AContractDistCp.java    |    33 +
 .../apache/hadoop/fs/s3a/AbstractS3ATestBase.java  |     6 +-
 .../fs/s3a/ITestS3AAWSCredentialsProvider.java     |     4 +-
 .../ITestS3AEncryptionSSECBlockOutputStream.java   |    45 -
 ...ptionSSEKMSUserDefinedKeyBlockOutputStream.java |    50 -
 .../ITestS3AEncryptionSSES3BlockOutputStream.java  |    44 -
 .../hadoop/fs/s3a/ITestS3AFailureHandling.java     |    11 +-
 .../fs/s3a/ITestS3ATemporaryCredentials.java       |   364 +-
 .../apache/hadoop/fs/s3a/MockS3AFileSystem.java    |    19 +-
 .../apache/hadoop/fs/s3a/MockS3ClientFactory.java  |     5 +-
 .../org/apache/hadoop/fs/s3a/S3ATestConstants.java |    20 +
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java     |   311 +-
 .../fs/s3a/TestS3AAWSCredentialsProvider.java      |   221 +-
 .../apache/hadoop/fs/s3a/TestS3AGetFileStatus.java |    33 +-
 .../apache/hadoop/fs/s3a/TestSSEConfiguration.java |    25 +
 .../apache/hadoop/fs/s3a/auth/ITestAssumeRole.java |    55 +-
 .../s3a/auth/ITestAssumedRoleCommitOperations.java |     2 +-
 .../apache/hadoop/fs/s3a/auth/RoleTestUtils.java   |    41 +-
 .../fs/s3a/auth/TestMarshalledCredentials.java     |   138 +
 .../s3a/auth/delegation/AbstractDelegationIT.java  |   207 +
 .../auth/delegation/CountInvocationsProvider.java  |    52 +
 .../hadoop/fs/s3a/auth/delegation/Csvout.java      |   103 +
 .../auth/delegation/ILoadTestRoleCredentials.java  |    38 +
 .../delegation/ILoadTestSessionCredentials.java    |   295 +
 .../s3a/auth/delegation/ITestDelegatedMRJob.java   |   272 +
 .../delegation/ITestRoleDelegationInFileystem.java |    68 +
 .../auth/delegation/ITestRoleDelegationTokens.java |   122 +
 .../ITestSessionDelegationInFileystem.java         |   727 +
 .../delegation/ITestSessionDelegationTokens.java   |   282 +
 .../delegation/MiniKerberizedHadoopCluster.java    |   378 +
 .../delegation/TestS3ADelegationTokenSupport.java  |   171 +
 .../hadoop/fs/s3a/commit/AbstractCommitITest.java  |     2 +-
 .../fs/s3a/commit/AbstractITCommitMRJob.java       |    17 +-
 .../fs/s3a/commit/staging/StagingTestBase.java     |    30 +-
 .../s3a/commit/staging/TestStagingCommitter.java   |     6 +-
 .../TestStagingDirectoryOutputCommitter.java       |    22 +-
 .../staging/TestStagingPartitionedFileListing.java |     4 +-
 .../staging/TestStagingPartitionedJobCommit.java   |     4 +-
 .../staging/TestStagingPartitionedTaskCommit.java  |    24 +-
 .../fileContext/ITestS3AFileContextStatistics.java |    20 +-
 .../s3a/s3guard/AbstractS3GuardToolTestBase.java   |   191 +-
 .../fs/s3a/s3guard/ITestDynamoDBMetadataStore.java |     6 +
 .../fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java   |     2 +-
 .../fs/s3a/s3guard/ITestS3GuardToolLocal.java      |    16 +-
 .../fs/s3a/s3guard/S3GuardToolTestHelper.java      |    89 +
 .../apache/hadoop/fs/s3a/scale/NanoTimerStats.java |   192 +
 .../hadoop/fs/s3a/select/AbstractS3SelectTest.java |   746 +
 .../org/apache/hadoop/fs/s3a/select/CsvFile.java   |   138 +
 .../apache/hadoop/fs/s3a/select/ITestS3Select.java |   967 +
 .../hadoop/fs/s3a/select/ITestS3SelectCLI.java     |   347 +
 .../hadoop/fs/s3a/select/ITestS3SelectLandsat.java |   432 +
 .../hadoop/fs/s3a/select/ITestS3SelectMRJob.java   |   206 +
 .../fs/s3a/yarn/ITestS3AMiniYarnCluster.java       |    38 +-
 .../java/org/apache/hadoop/mapreduce/MockJob.java  |   116 +
 hadoop-tools/hadoop-azure/pom.xml                  |     2 +-
 .../fs/azure/AzureNativeFileSystemStore.java       |     5 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java      |    10 +
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java    |    37 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java      |   205 +-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |     3 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |    23 +-
 .../fs/azurebfs/oauth2/IdentityTransformer.java    |   278 +
 .../fs/azurebfs/services/AbfsInputStream.java      |     3 +-
 .../fs/azurebfs/services/AbfsRestOperation.java    |     6 +
 .../src/site/markdown/testing_azure.md             |    55 +
 .../fs/azure/AzureBlobStorageTestAccount.java      |     3 +
 .../hadoop/fs/azure/ITestContainerChecks.java      |     6 -
 .../hadoop/fs/azure/ITestWasbRemoteCallHelper.java |    69 +-
 .../contract/NativeAzureFileSystemContract.java    |     1 +
 .../fs/azure/integration/AzureTestUtils.java       |     9 +
 .../ITestAzureFileSystemInstrumentation.java       |    57 +-
 .../apache/hadoop/fs/azurebfs/ITestAbfsClient.java |    31 +-
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  |   301 +
 .../fs/azurebfs/ITestAzureBlobFileSystemE2E.java   |    37 +-
 .../fs/azurebfs/services/TestAbfsClient.java       |    16 +-
 .../hadoop-azure/src/test/resources/azure-test.xml |     5 +
 .../hadoop-azure/src/test/resources/wasb.xml       |     7 +-
 hadoop-tools/hadoop-datajoin/pom.xml               |     2 +-
 hadoop-tools/hadoop-distcp/pom.xml                 |     2 +-
 .../apache/hadoop/tools/CopyListingFileStatus.java |    14 +-
 .../org/apache/hadoop/tools/DistCpConstants.java   |     7 +-
 .../org/apache/hadoop/tools/DistCpContext.java     |     4 +
 .../apache/hadoop/tools/DistCpOptionSwitch.java    |    16 +-
 .../org/apache/hadoop/tools/DistCpOptions.java     |    19 +
 .../org/apache/hadoop/tools/OptionsParser.java     |     4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |     6 +-
 .../tools/mapred/RetriableFileCopyCommand.java     |    52 +-
 .../org/apache/hadoop/tools/util/DistCpUtils.java  |     1 +
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |     6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |    34 +-
 .../tools/contract/AbstractContractDistCpTest.java |    68 +-
 .../apache/hadoop/tools/util/TestDistCpUtils.java  |    88 +-
 hadoop-tools/hadoop-extras/pom.xml                 |     2 +-
 hadoop-tools/hadoop-fs2img/pom.xml                 |     2 +-
 .../namenode/ITestProvidedImplementation.java      |     2 +-
 hadoop-tools/hadoop-gridmix/pom.xml                |     2 +-
 .../hadoop/mapred/gridmix/TestGridMixClasses.java  |     8 +-
 hadoop-tools/hadoop-kafka/pom.xml                  |     2 +-
 .../hadoop/metrics2/impl/TestKafkaMetrics.java     |     4 +-
 .../src/main/html/simulate.html.template           |    16 +-
 .../java/org/apache/hadoop/yarn/sls/SLSRunner.java |    11 +-
 .../hadoop/yarn/sls/appmaster/AMSimulator.java     |    10 +-
 .../hadoop/yarn/sls/appmaster/MRAMSimulator.java   |     9 +-
 .../yarn/sls/appmaster/StreamAMSimulator.java      |     5 +-
 .../yarn/sls/resourcemanager/MockAMLauncher.java   |    54 +-
 .../hadoop/yarn/sls/appmaster/TestAMSimulator.java |    10 +-
 hadoop-tools/hadoop-streaming/pom.xml              |     2 +-
 .../streaming/mapreduce/StreamInputFormat.java     |    14 +-
 .../hadoop-yarn/conf/container-executor.cfg        |     8 +-
 .../jdiff/Apache_Hadoop_YARN_Client_3.1.2.xml      |  2947 ++
 .../jdiff/Apache_Hadoop_YARN_Common_3.1.2.xml      |  3566 ++
 .../yarn/api/ContainerManagementProtocol.java      |    19 +
 .../apache/hadoop/yarn/api/CsiAdaptorPlugin.java   |    50 +
 .../apache/hadoop/yarn/api/CsiAdaptorProtocol.java |    44 +
 .../GetLocalizationStatusesRequest.java            |    69 +
 .../GetLocalizationStatusesResponse.java           |    87 +
 .../protocolrecords/NodePublishVolumeRequest.java  |    94 +
 .../protocolrecords/NodePublishVolumeResponse.java |    31 +
 .../NodeUnpublishVolumeRequest.java                |    44 +
 .../NodeUnpublishVolumeResponse.java               |    31 +
 .../hadoop/yarn/api/records/ApplicationReport.java |    49 +-
 .../hadoop/yarn/api/records/LocalizationState.java |    36 +
 .../yarn/api/records/LocalizationStatus.java       |    95 +
 .../apache/hadoop/yarn/api/records/NodeState.java  |    12 +
 .../yarn/api/records/ResourceInformation.java      |     4 +-
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |    73 +-
 .../util/constraint/PlacementConstraintParser.java |    63 +-
 .../hadoop/yarn/util/csi/CsiConfigUtils.java       |    79 +
 .../apache/hadoop/yarn/util/csi/package-info.java  |    21 +
 .../hadoop/yarn/util/resource/ResourceUtils.java   |   136 +
 .../src/main/proto/YarnCsiAdaptor.proto            |     6 +
 .../main/proto/containermanagement_protocol.proto  |     4 +
 .../src/main/proto/yarn_csi_adaptor.proto          |    24 +
 .../src/main/proto/yarn_protos.proto               |     1 +
 .../src/main/proto/yarn_service_protos.proto       |    28 +
 .../resource/TestPlacementConstraintParser.java    |    79 +-
 .../pom.xml                                        |     3 +-
 .../distributedshell/ApplicationMaster.java        |     9 +-
 .../yarn/applications/distributedshell/Client.java |    16 +
 .../distributedshell/TestDSAppMaster.java          |     5 +-
 .../TestDSWithMultipleNodeManager.java             |   361 +
 .../distributedshell/TestDistributedShell.java     |   119 +-
 .../TestDistributedShellWithNodeLabels.java        |   165 -
 .../hadoop-yarn-services-api/pom.xml               |     2 +-
 .../hadoop-yarn-services-core/pom.xml              |     2 +-
 .../hadoop/yarn/service/ServiceScheduler.java      |    87 +-
 .../hadoop/yarn/service/api/records/Container.java |    30 +
 .../service/api/records/LocalizationStatus.java    |   132 +
 .../service/api/records/ResourceInformation.java   |    17 +
 .../hadoop/yarn/service/component/Component.java   |    10 +-
 .../yarn/service/component/NeverRestartPolicy.java |    11 +-
 .../service/component/OnFailureRestartPolicy.java  |    12 +-
 .../component/instance/ComponentInstance.java      |   151 +-
 .../yarn/service/conf/YarnServiceConstants.java    |     2 +
 .../containerlaunch/ContainerLaunchService.java    |    28 +-
 .../service/provider/AbstractProviderService.java  |    12 +-
 .../yarn/service/provider/ProviderService.java     |    40 +-
 .../yarn/service/provider/ProviderUtils.java       |    19 +-
 .../yarn/service/MockRunningServiceContext.java    |    45 +-
 .../apache/hadoop/yarn/service/MockServiceAM.java  |     8 +-
 .../hadoop/yarn/service/ServiceTestUtils.java      |    28 +-
 .../apache/hadoop/yarn/service/TestServiceAM.java  |     7 +-
 .../yarn/service/client/TestServiceClient.java     |    12 +-
 .../yarn/service/component/TestComponent.java      |    57 +
 .../component/TestComponentRestartPolicy.java      |     4 +-
 .../component/instance/TestComponentInstance.java  |    72 +-
 .../yarn/service/conf/TestAppJsonResolve.java      |     1 +
 .../yarn/service/monitor/TestServiceMonitor.java   |    15 +-
 .../yarn/service/provider/TestProviderUtils.java   |    11 +-
 .../providers/TestAbstractClientProvider.java      |     4 +-
 .../providers/TestDefaultClientProvider.java       |     4 +-
 .../yarn/service/conf/examples/external3.json      |     1 +
 .../hadoop-yarn-submarine/pom.xml                  |   213 -
 .../hadoop/yarn/submarine/client/cli/CliUtils.java |   213 -
 .../client/cli/param/RunJobParameters.java         |   333 -
 .../submarine/common/api/JobComponentStatus.java   |    73 -
 .../yarn/submarine/common/api/JobStatus.java       |    87 -
 .../api/builder/JobComponentStatusBuilder.java     |    44 -
 .../common/api/builder/JobStatusBuilder.java       |    64 -
 .../yarn/submarine/runtimes/RuntimeFactory.java    |   106 -
 .../yarnservice/YarnServiceJobMonitor.java         |    58 -
 .../yarnservice/YarnServiceJobSubmitter.java       |   903 -
 .../src/site/markdown/QuickStart.md                |   200 -
 .../submarine/client/cli/TestRunJobCliParsing.java |   275 -
 .../cli/yarnservice/TestYarnServiceRunJobCli.java  |  1220 -
 .../hadoop-yarn/hadoop-yarn-applications/pom.xml   |     1 -
 .../hadoop-yarn/hadoop-yarn-client/pom.xml         |     2 +-
 .../apache/hadoop/yarn/client/api/NMClient.java    |    34 +
 .../yarn/client/api/impl/AHSv2ClientImpl.java      |    12 +-
 .../hadoop/yarn/client/api/impl/NMClientImpl.java  |    56 +
 .../yarn/client/api/impl/YarnClientImpl.java       |     2 +-
 .../apache/hadoop/yarn/client/cli/RMAdminCLI.java  |     6 +-
 .../org/apache/hadoop/yarn/client/cli/YarnCLI.java |     6 +-
 .../client/api/async/impl/TestAMRMClientAsync.java |     8 +-
 .../client/api/async/impl/TestNMClientAsync.java   |     4 +-
 .../hadoop/yarn/client/api/impl/TestAHSClient.java |     6 +-
 .../yarn/client/api/impl/TestAHSv2ClientImpl.java  |     9 +
 .../yarn/client/api/impl/TestAMRMClient.java       |     2 +-
 .../client/api/impl/TestSharedCacheClientImpl.java |     2 +-
 .../yarn/client/api/impl/TestYarnClient.java       |    52 +-
 .../yarn/client/api/impl/TestYarnClientImpl.java   |     7 +-
 .../apache/hadoop/yarn/client/cli/TestLogsCLI.java |    10 +-
 .../yarn/client/cli/TestNodeAttributesCLI.java     |     2 +-
 .../hadoop/yarn/client/cli/TestRMAdminCLI.java     |    17 +-
 .../apache/hadoop/yarn/client/cli/TestYarnCLI.java |     6 +-
 .../hadoop-yarn/hadoop-yarn-common/pom.xml         |    12 +-
 .../ContainerManagementProtocolPBClientImpl.java   |    22 +
 .../pb/client/CsiAdaptorProtocolPBClientImpl.java  |    36 +
 .../ContainerManagementProtocolPBServiceImpl.java  |    20 +
 .../service/CsiAdaptorProtocolPBServiceImpl.java   |    36 +
 .../pb/GetLocalizationStatusesRequestPBImpl.java   |   156 +
 .../pb/GetLocalizationStatusesResponsePBImpl.java  |   260 +
 .../impl/pb/NodePublishVolumeRequestPBImpl.java    |   201 +
 .../impl/pb/NodePublishVolumeResponsePBImpl.java   |    62 +
 .../impl/pb/NodeUnpublishVolumeRequestPBImpl.java  |    89 +
 .../impl/pb/NodeUnpublishVolumeResponsePBImpl.java |    61 +
 .../records/impl/pb/ApplicationReportPBImpl.java   |    12 +
 .../records/impl/pb/LocalizationStatusPBImpl.java  |   192 +
 .../yarn/api/records/impl/pb/ProtoUtils.java       |    38 +
 .../metrics/ApplicationMetricsConstants.java       |     3 +
 .../util/resource/DominantResourceCalculator.java  |    32 +-
 .../hadoop/yarn/util/resource/Resources.java       |    36 +-
 .../util/timeline/TimelineEntityV2Converter.java   |    18 +-
 .../resources/webapps/static/yarn.dt.plugins.js    |    23 +
 .../src/main/resources/yarn-default.xml            |    58 +-
 .../apache/hadoop/yarn/TestContainerLaunchRPC.java |     9 +
 .../yarn/TestContainerResourceIncreaseRPC.java     |     9 +
 .../yarn/client/api/impl/TestTimelineClient.java   |     8 +-
 .../api/impl/TestTimelineClientForATS1_5.java      |     2 +-
 .../impl/pb/TestRpcClientFactoryPBImpl.java        |     2 +-
 .../impl/pb/TestRpcServerFactoryPBImpl.java        |     2 +-
 .../yarn/util/resource/TestResourceUtils.java      |   124 +
 .../hadoop/yarn/util/resource/TestResources.java   |     8 +-
 .../resources/resource-types/resource-types-6.xml  |    58 +
 .../hadoop-yarn/hadoop-yarn-csi/pom.xml            |    18 +
 .../hadoop/yarn/csi/adaptor/CsiAdaptorFactory.java |    73 +
 .../csi/adaptor/CsiAdaptorProtocolService.java     |    60 +-
 .../yarn/csi/adaptor/CsiAdaptorServices.java       |   108 +
 .../yarn/csi/adaptor/DefaultCsiAdaptorImpl.java    |   132 +
 .../apache/hadoop/yarn/csi/client/CsiClient.java   |     6 +
 .../hadoop/yarn/csi/client/CsiClientImpl.java      |    20 +
 .../GetPluginInfoResponseProtoTranslator.java      |     2 +-
 .../NodePublishVolumeRequestProtoTranslator.java   |    77 +
 .../NodeUnpublishVolumeRequestProtoTranslator.java |    49 +
 .../csi/translator/ProtoTranslatorFactory.java     |    12 +
 .../apache/hadoop/yarn/csi/utils/ConfigUtils.java  |    61 -
 .../hadoop/yarn/csi/adaptor/MockCsiAdaptor.java    |    85 +
 .../yarn/csi/adaptor/TestCsiAdaptorService.java    |   318 +-
 .../csi/adaptor/TestNodePublishVolumeRequest.java  |    55 +
 .../hadoop/yarn/csi/client/ICsiClientTest.java     |    53 +
 .../pom.xml                                        |     2 +-
 .../ApplicationHistoryManagerImpl.java             |     6 +-
 .../ApplicationHistoryManagerOnTimelineStore.java  |    20 +-
 ...stApplicationHistoryManagerOnTimelineStore.java |     1 +
 .../webapp/TestAHSWebServices.java                 |     1 +
 .../timeline/webapp/TestTimelineWebServices.java   |     2 +-
 .../hadoop-yarn-server-common/pom.xml              |     2 +-
 .../api/protocolrecords/NodeHeartbeatRequest.java  |     4 +
 .../api/protocolrecords/NodeHeartbeatResponse.java |    21 +-
 .../impl/pb/NodeHeartbeatRequestPBImpl.java        |    13 +
 .../impl/pb/NodeHeartbeatResponsePBImpl.java       |    91 +-
 .../yarn/server/utils/YarnServerBuilderUtils.java  |    68 +
 .../hadoop/yarn/server/webapp/ContainerBlock.java  |    66 +-
 .../hadoop/yarn/server/webapp/WebPageUtils.java    |     5 +-
 .../hadoop/yarn/server/webapp/dao/AppInfo.java     |     4 +-
 .../yarn/server/webapp/dao/ContainerInfo.java      |     4 +
 .../proto/yarn_server_common_service_protos.proto  |     2 +
 .../test/java/org/apache/hadoop/yarn/TestRPC.java  |     9 +
 .../hadoop/yarn/TestYarnServerApiClasses.java      |    55 +-
 .../api/protocolrecords/TestProtocolRecords.java   |    21 +-
 .../utils/FederationPoliciesTestUtil.java          |     2 +-
 .../yarn/server/webapp/ContainerBlockTest.java     |    93 +
 .../hadoop-yarn-server-nodemanager/pom.xml         |     6 +-
 .../src/CMakeLists.txt                             |     2 +
 .../server/nodemanager/LinuxContainerExecutor.java |     4 +-
 .../server/nodemanager/NodeStatusUpdaterImpl.java  |    17 +-
 .../nodemanager/api/deviceplugin/Device.java       |     4 +-
 .../nodemanager/containermanager/AuxServices.java  |   121 +-
 .../containermanager/ContainerManagerImpl.java     |    53 +
 .../containermanager/container/Container.java      |    11 +
 .../containermanager/container/ContainerImpl.java  |    56 +-
 .../launcher/ContainerCleanup.java                 |    21 +-
 .../containermanager/launcher/ContainerLaunch.java |    32 +
 .../linux/privileged/PrivilegedOperation.java      |     1 +
 .../linux/resources/gpu/GpuResourceAllocator.java  |     5 +-
 .../linux/runtime/DockerLinuxContainerRuntime.java |    93 +-
 .../linux/runtime/docker/DockerRunCommand.java     |     5 +
 .../localizer/ContainerLocalizer.java              |     1 +
 .../containermanager/localizer/ResourceSet.java    |    45 +-
 .../com/nvidia/NvidiaGPUPluginForRuntimeV2.java    |   240 +
 .../resourceplugin/com/nvidia/package-info.java    |    19 +
 .../deviceframework/DeviceMappingManager.java      |    60 +-
 .../deviceframework/DevicePluginAdapter.java       |    25 +-
 .../DeviceResourceDockerRuntimePluginImpl.java     |   233 +
 .../deviceframework/DeviceResourceHandlerImpl.java |   214 +-
 .../deviceframework/ShellWrapper.java              |    46 +
 .../gpu/GpuDockerCommandPluginFactory.java         |     4 +
 .../gpu/NvidiaDockerV2CommandPlugin.java           |   111 +
 .../scheduler/ContainerScheduler.java              |    14 +-
 .../volume/csi/ContainerVolumePublisher.java       |   207 +
 .../containermanager/volume/csi/package-info.java  |    22 +
 .../executor/ContainerStartContext.java            |    12 +
 .../nodemanager/metrics/NodeManagerMetrics.java    |    21 +
 .../timelineservice/NMTimelinePublisher.java       |   276 +-
 .../server/nodemanager/webapp/NMWebServices.java   |    31 +
 .../src/main/native/container-executor/impl/main.c |     6 +
 .../impl/modules/cgroups/cgroups-operations.c      |     2 +-
 .../impl/modules/devices/devices-module.c          |   281 +
 .../impl/modules/devices/devices-module.h          |    45 +
 .../src/main/native/container-executor/impl/util.c |     3 +
 .../container-executor/impl/utils/docker-util.c    |    33 +
 .../container-executor/impl/utils/docker-util.h    |     3 +-
 .../test/modules/devices/test-devices-module.cc    |   298 +
 .../test/utils/test_docker_util.cc                 |   103 +
 .../native/oom-listener/impl/oom_listener_main.c   |     4 +-
 .../oom-listener/test/oom_listener_test_main.cc    |    14 +-
 .../nodemanager/TestDefaultContainerExecutor.java  |     9 +-
 .../nodemanager/TestLinuxContainerExecutor.java    |     4 +-
 .../TestLinuxContainerExecutorWithMocks.java       |    17 +-
 .../server/nodemanager/TestNodeManagerReboot.java  |     2 +-
 .../server/nodemanager/TestNodeStatusUpdater.java  |    32 +-
 .../containermanager/BaseContainerManagerTest.java |     8 +-
 .../containermanager/TestAuxServices.java          |    66 +-
 .../containermanager/TestContainerManager.java     |   128 +
 .../TestContainerManagerRecovery.java              |     2 +-
 .../application/TestApplication.java               |    11 +-
 .../containermanager/container/TestContainer.java  |    65 +-
 .../task/DockerContainerDeletionMatcher.java       |     5 +-
 .../deletion/task/FileDeletionMatcher.java         |     5 +-
 .../launcher/TestContainerLaunch.java              |     2 +-
 .../launcher/TestContainerRelaunch.java            |     4 +-
 .../launcher/TestContainersLauncher.java           |     8 +-
 .../privileged/MockPrivilegedOperationCaptor.java  |    10 +-
 .../TestCGroupElasticMemoryController.java         |     2 +-
 .../linux/resources/TestCGroupsHandlerImpl.java    |     2 +-
 .../linux/resources/TestDefaultOOMHandler.java     |     2 +-
 .../TestNetworkPacketTaggingHandlerImpl.java       |     4 +-
 .../TestTrafficControlBandwidthHandlerImpl.java    |     2 +-
 .../linux/resources/TestTrafficController.java     |     4 +-
 .../resources/fpga/TestFpgaResourceHandler.java    |     5 +-
 .../resources/gpu/TestGpuResourceHandler.java      |     8 +-
 .../resources/numa/TestNumaResourceAllocator.java  |     6 +-
 .../numa/TestNumaResourceHandlerImpl.java          |     6 +-
 .../linux/runtime/TestDockerContainerRuntime.java  |   181 +-
 .../runtime/docker/TestDockerCommandExecutor.java  |     6 +-
 .../linux/runtime/docker/TestDockerRunCommand.java |     5 +-
 .../localizer/TestContainerLocalizer.java          |    27 +-
 .../localizer/TestLocalResourcesTrackerImpl.java   |     6 +-
 .../localizer/TestLocalizedResource.java           |    22 +-
 .../localizer/TestResourceLocalizationService.java |    75 +-
 .../localizer/TestResourceSet.java                 |   106 +
 .../sharedcache/TestSharedCacheUploader.java       |     4 +-
 .../logaggregation/TestAppLogAggregatorImpl.java   |     2 +-
 .../logaggregation/TestLogAggregationService.java  |     9 +-
 .../loghandler/TestNonAggregatingLogHandler.java   |    16 +-
 .../resourceplugin/TestResourcePluginManager.java  |     2 +-
 .../deviceframework/TestDeviceMappingManager.java  |    53 +-
 .../deviceframework/TestDevicePluginAdapter.java   |   420 +-
 .../resourceplugin/fpga/TestFpgaDiscoverer.java    |     4 +-
 .../gpu/TestNvidiaDockerV2CommandPlugin.java       |   130 +
 .../nvidia/com/TestNvidiaGpuPlugin.java            |   108 +
 .../scheduler/TestContainerSchedulerQueuing.java   |    32 +-
 .../scheduler/TestContainerSchedulerRecovery.java  |     4 +-
 .../TestConfigurationNodeAttributesProvider.java   |     7 +-
 .../recovery/TestNMLeveldbStateStoreService.java   |     2 +-
 .../timelineservice/TestNMTimelinePublisher.java   |     2 +
 .../server/nodemanager/webapp/MockContainer.java   |    16 +
 .../webapp/TestNMWebServicesAuxServices.java       |    44 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml     |     6 +-
 .../server/resourcemanager/NodesListManager.java   |     6 +-
 .../OpportunisticContainerAllocatorAMService.java  |     8 +-
 .../resourcemanager/RMActiveServiceContext.java    |    29 +-
 .../yarn/server/resourcemanager/RMAppManager.java  |     1 +
 .../yarn/server/resourcemanager/RMContext.java     |    11 +-
 .../yarn/server/resourcemanager/RMContextImpl.java |    15 +-
 .../yarn/server/resourcemanager/RMServerUtils.java |    22 +-
 .../resourcemanager/ResourceTrackerService.java    |    33 +-
 .../metrics/AbstractSystemMetricsPublisher.java    |     5 +
 .../metrics/CombinedSystemMetricsPublisher.java    |     6 +
 .../metrics/NoOpSystemMetricPublisher.java         |     4 +
 .../metrics/SystemMetricsPublisher.java            |     2 +
 .../metrics/TimelineServiceV2Publisher.java        |    14 +
 .../AbstractPreemptableResourceCalculator.java     |     2 +-
 .../monitor/capacity/TempQueuePerPartition.java    |     2 +-
 .../server/resourcemanager/rmapp/RMAppImpl.java    |     4 +-
 .../rmapp/attempt/RMAppAttemptImpl.java            |    14 +-
 .../scheduler/AbstractYarnScheduler.java           |    39 +-
 .../scheduler/ClusterNodeTracker.java              |     2 +-
 .../resourcemanager/scheduler/SchedulerUtils.java  |    14 +-
 .../scheduler/capacity/AbstractCSQueue.java        |    55 +-
 .../scheduler/capacity/CSQueue.java                |     4 +
 .../capacity/CapacitySchedulerConfiguration.java   |    89 +-
 .../scheduler/capacity/ParentQueue.java            |     4 +-
 .../scheduler/fair/ConfigurableResource.java       |     5 +-
 .../scheduler/fair/FairSchedulerConfiguration.java |     2 +-
 .../scheduler/fair/policies/ComputeFairShares.java |    62 +-
 .../policies/DominantResourceFairnessPolicy.java   |     2 +-
 .../security/DelegationTokenRenewer.java           |    17 +-
 .../volume/csi/lifecycle/VolumeImpl.java           |     2 +
 .../webapp/JAXBContextResolver.java                |     3 +-
 .../server/resourcemanager/webapp/NodesPage.java   |     5 +-
 .../server/resourcemanager/webapp/RMWSConsts.java  |     3 +
 .../webapp/RMWebServiceProtocol.java               |    15 +
 .../resourcemanager/webapp/RMWebServices.java      |    75 +-
 .../resourcemanager/webapp/dao/NodeInfo.java       |     9 +
 .../webapp/dao/ResourceOptionInfo.java             |    65 +
 .../resourcemanager/webapp/dao/package-info.java   |    27 +
 .../hadoop/yarn/server/resourcemanager/MockAM.java |    27 +
 .../hadoop/yarn/server/resourcemanager/MockNM.java |     3 +
 .../yarn/server/resourcemanager/MockNodes.java     |    47 +-
 .../yarn/server/resourcemanager/NodeManager.java   |     9 +
 .../resourcemanager/TestAMAuthorization.java       |     9 +
 .../server/resourcemanager/TestAppManager.java     |    10 +-
 .../TestAppManagerWithFairScheduler.java           |     4 +-
 .../resourcemanager/TestApplicationACLs.java       |     2 +-
 .../TestApplicationMasterLauncher.java             |    11 +-
 .../resourcemanager/TestClientRMService.java       |    35 +-
 .../server/resourcemanager/TestClientRMTokens.java |     6 +-
 ...stOpportunisticContainerAllocatorAMService.java |    78 +
 .../hadoop/yarn/server/resourcemanager/TestRM.java |    14 +-
 .../resourcemanager/TestRMEmbeddedElector.java     |     2 +-
 .../resourcemanager/TestRMNodeTransitions.java     |    17 +-
 .../yarn/server/resourcemanager/TestRMRestart.java |     8 +-
 .../server/resourcemanager/TestRMServerUtils.java  |    38 +
 .../TestResourceTrackerService.java                |   166 +-
 .../TestWorkPreservingRMRestart.java               |     2 +-
 .../applicationsmanager/MockAsm.java               |    12 +-
 .../TestRMAppLogAggregationStatus.java             |     2 +-
 .../metrics/TestSystemMetricsPublisherForV2.java   |     4 +-
 ...ionalCapacityPreemptionPolicyMockFramework.java |    14 +-
 .../TestPreemptionForQueueWithPriorities.java      |     2 +-
 .../TestProportionalCapacityPreemptionPolicy.java  |    19 +-
 ...lCapacityPreemptionPolicyForNodePartitions.java |     2 +-
 ...acityPreemptionPolicyForReservedContainers.java |     2 +-
 ...lCapacityPreemptionPolicyInterQueueWithDRF.java |     2 +-
 ...ortionalCapacityPreemptionPolicyIntraQueue.java |     2 +-
 ...cityPreemptionPolicyIntraQueueFairOrdering.java |     2 +-
 ...apacityPreemptionPolicyIntraQueueUserLimit.java |     2 +-
 ...lCapacityPreemptionPolicyIntraQueueWithDRF.java |     2 +-
 ...alCapacityPreemptionPolicyPreemptToBalance.java |     2 +-
 .../TestFileSystemNodeAttributeStore.java          |     2 +-
 .../reservation/ReservationSystemTestUtil.java     |     8 +-
 .../TestCapacitySchedulerPlanFollower.java         |    12 +-
 .../reservation/TestFairSchedulerPlanFollower.java |    10 +-
 .../reservation/TestReservationInputValidator.java |     2 +-
 .../planning/TestSimpleCapacityReplanner.java      |     2 +-
 .../rmapp/TestNodesListManager.java                |    11 +-
 .../rmapp/TestRMAppTransitions.java                |     9 +-
 .../rmapp/attempt/TestRMAppAttemptTransitions.java |   106 +-
 .../rmcontainer/TestRMContainerImpl.java           |    16 +-
 .../TestConfigurationMutationACLPolicies.java      |     4 +-
 .../scheduler/TestSchedulerUtils.java              |     4 +-
 .../scheduler/capacity/TestApplicationLimits.java  |    14 +-
 .../capacity/TestApplicationLimitsByPartition.java |    13 +-
 .../capacity/TestCSAllocateCustomResource.java     |   193 +
 .../scheduler/capacity/TestCapacityScheduler.java  |   278 +-
 .../scheduler/capacity/TestChildQueueOrder.java    |     9 +-
 .../scheduler/capacity/TestLeafQueue.java          |    11 +-
 .../scheduler/capacity/TestParentQueue.java        |     4 +-
 .../scheduler/capacity/TestQueueState.java         |     3 +-
 .../scheduler/capacity/TestQueueStateManager.java  |     2 +-
 .../scheduler/capacity/TestReservations.java       |     2 +-
 .../TestSchedulingRequestContainerAllocation.java  |   228 +-
 ...tSchedulingRequestContainerAllocationAsync.java |    53 +-
 .../scheduler/capacity/TestUtils.java              |     4 +-
 .../scheduler/fair/FakeSchedulable.java            |    17 +-
 .../scheduler/fair/TestComputeFairShares.java      |   104 +-
 .../scheduler/fair/TestContinuousScheduling.java   |     2 +-
 .../scheduler/fair/TestFSSchedulerNode.java        |     2 +-
 .../TestSingleConstraintAppPlacementAllocator.java |     4 +-
 .../security/TestClientToAMTokens.java             |     2 +-
 .../security/TestDelegationTokenRenewer.java       |   112 +-
 .../security/TestProxyCAManager.java               |     2 +-
 .../security/TestRMAuthenticationFilter.java       |     3 +-
 .../volume/csi/TestVolumeProcessor.java            |    76 +-
 .../webapp/TestApplicationsRequestBuilder.java     |     4 +-
 .../resourcemanager/webapp/TestRMWebApp.java       |     2 +-
 .../resourcemanager/webapp/TestRMWebServices.java  |     2 +-
 .../webapp/TestRMWebServicesNodes.java             |   191 +-
 .../src/test/resources/resource-types-test.xml     |    22 +
 .../hadoop-yarn-server-router/pom.xml              |     9 +-
 .../webapp/DefaultRequestInterceptorREST.java      |    12 +
 .../router/webapp/FederationInterceptorREST.java   |   163 +-
 .../server/router/webapp/RouterWebServices.java    |    19 +
 .../webapp/MockDefaultRequestInterceptorREST.java  |    12 +
 .../router/webapp/MockRESTRequestInterceptor.java  |     9 +-
 .../webapp/PassThroughRESTRequestInterceptor.java  |     9 +
 .../webapp/TestFederationInterceptorREST.java      |    22 +
 .../router/webapp/TestRouterWebServicesREST.java   |    54 +-
 .../hadoop-yarn-server-sharedcachemanager/pom.xml  |     2 +-
 .../server/sharedcachemanager/TestCleanerTask.java |     7 +-
 .../TestSCMAdminProtocolService.java               |     4 +-
 .../store/TestInMemorySCMStore.java                |     3 +-
 .../hadoop-yarn-server-tests/pom.xml               |     2 +-
 .../TestTimelineServiceClientIntegration.java      |     2 +-
 .../security/TestTimelineAuthFilterForV2.java      |     4 +-
 .../pom.xml                                        |     2 +-
 .../pom.xml                                        |     2 +-
 .../storage/DataGeneratorForTest.java              |     2 +-
 .../storage/HBaseTimelineSchemaCreator.java        |   378 +
 .../storage/TimelineSchemaCreator.java             |   378 -
 .../hadoop-yarn-server-timelineservice/pom.xml     |     2 +-
 .../collector/NodeTimelineCollectorManager.java    |    15 +-
 .../reader/TimelineReaderWebServices.java          |    27 +-
 .../storage/NoOpTimelineReaderImpl.java            |    80 +
 .../storage/NoOpTimelineWriterImpl.java            |    88 +
 .../timelineservice/storage/SchemaCreator.java     |    28 +
 .../storage/TimelineSchemaCreator.java             |    80 +
 .../collector/TestNMTimelineCollectorManager.java  |     8 +-
 .../TestPerNodeTimelineCollectorsAuxService.java   |     2 +-
 .../collector/TestTimelineCollector.java           |     2 +-
 ...TimelineReaderWhitelistAuthorizationFilter.java |     2 +-
 .../storage/DummyTimelineSchemaCreator.java        |    29 +
 .../storage/TestTimelineSchemaCreator.java         |    41 +
 .../hadoop-yarn-server-web-proxy/pom.xml           |     2 +-
 .../server/webproxy/amfilter/TestAmFilter.java     |     3 +-
 .../src/site/markdown/DockerContainers.md          |   128 +-
 .../src/site/markdown/NodeManager.md               |     7 +-
 .../src/site/markdown/NodeManagerRest.md           |     7 +-
 .../site/markdown/OpportunisticContainers.md.vm    |     1 +
 .../src/site/markdown/PlacementConstraints.md.vm   |     2 +-
 .../src/site/markdown/ResourceManagerRest.md       |   114 +
 .../src/site/markdown/TimelineServiceV2.md         |     2 +-
 .../src/site/markdown/UsingGpus.md                 |     9 +-
 .../site/markdown/yarn-service/Configurations.md   |     3 +-
 .../main/webapp/app/components/timeline-view.js    |    12 +-
 .../main/webapp/app/controllers/yarn-app/logs.js   |     9 +-
 .../controllers/yarn-component-instances/info.js   |    13 +
 .../src/main/webapp/app/helpers/log-files-comma.js |     2 +-
 .../src/main/webapp/app/initializers/loader.js     |     4 +-
 .../src/main/webapp/app/models/cluster-metric.js   |     2 +-
 .../src/main/webapp/app/models/yarn-app-attempt.js |     6 +
 .../webapp/app/models/yarn-component-instance.js   |     7 +
 .../src/main/webapp/app/models/yarn-container.js   |     9 +-
 .../webapp/app/models/yarn-timeline-container.js   |     9 +-
 .../src/main/webapp/app/routes/yarn-app/logs.js    |     6 +
 .../main/webapp/app/routes/yarn-node-container.js  |     8 +-
 .../webapp/app/serializers/yarn-flowrun-brief.js   |    11 +-
 .../main/webapp/app/serializers/yarn-flowrun.js    |    11 +-
 .../app/templates/components/app-attempt-table.hbs |     8 +-
 .../app/templates/components/container-table.hbs   |     8 +-
 .../src/main/webapp/app/templates/yarn-app.hbs     |     4 +-
 .../main/webapp/app/templates/yarn-app/charts.hbs  |     2 +-
 .../main/webapp/app/templates/yarn-apps/apps.hbs   |     2 +
 .../app/templates/yarn-component-instance/info.hbs |     4 +
 .../main/webapp/app/templates/yarn-node/info.hbs   |     4 +-
 .../hadoop-yarn-ui/src/main/webapp/bower.json      |     2 +-
 .../tests/unit/models/cluster-metric-test.js       |     2 +-
 pom.xml                                            |    12 +-
 1788 files changed, 164350 insertions(+), 18868 deletions(-)
 create mode 100644 hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_3.1.2.xml
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSBuilder.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FutureDataInputStreamBuilder.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AbstractFSBuilderImpl.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureDataInputStreamBuilderImpl.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/WrappedIOException.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/package-info.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/PassthroughCodec.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LambdaUtils.java
 create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md
 create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md
 delete mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.2/CHANGELOG.3.1.2.md
 create mode 100644 hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.2/CHANGES.3.1.2.md
 create mode 100644 hadoop-common-project/hadoop-common/src/site/resources/images/faircallqueue-overview.png
 create mode 100644 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTail.java
 create mode 100644 hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/KMSBenchmark.java
 create mode 100644 hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/ClientCredentialInterceptor.java
 create mode 100644 hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 delete mode 100644 hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolPB.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolServerSideTranslatorPB.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/package-info.java
 delete mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientAsyncReply.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientReply.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/exception/SCMSecurityException.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/exception/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenException.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/OzoneBlockTokenIdentifier.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/OzoneBlockTokenSelector.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/TokenVerifier.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/BaseApprover.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateApprover.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultApprover.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/DefaultCAProfile.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/DefaultProfile.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/PKIProfile.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/PKIProfiles/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DNCertificateClient.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/CertificateCodec.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/utils/CertificateSignRequest.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/utils/SelfSignedCertificate.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/utils/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/exceptions/CertificateException.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/exceptions/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/KeyCodec.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/GrpcClientInterceptor.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/GrpcServerInterceptor.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/TraceAllMethod.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/TracingUtil.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/package-info.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneSecurityUtil.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RetriableTask.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/Scheduler.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/VersionInfo.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBCheckpointSnapshot.java
 create mode 100644 hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBCheckpointManager.java
 create mode 100644 hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/token/TestOzoneBlockTokenIdentifier.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/token/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockApprover.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockCAStore.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/TestDefaultCAServer.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/TestDefaultProfile.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestCertificateClientInit.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/utils/TestCertificateCodec.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/utils/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestKeyCodec.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/package-info.java
 create mode 100644 hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestRetriableTask.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteContainerCommandHandler.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ServerCredentialInterceptor.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/AbstractFuture.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolumeChecker.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/ThrottledAsyncChecker.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/TimeoutFuture.java
 create mode 100644 hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteContainerCommand.java
 create mode 100644 hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestHddsVolumeChecker.java
 create mode 100644 hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSetDiskChecks.java
 create mode 100644 hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMarkUnhealthy.java
 create mode 100644 hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandlerWithUnhealthyContainer.java
 create mode 100644 hadoop-hdds/docs/content/AuditParser.md
 create mode 100644 hadoop-hdds/docs/content/S3Commands.md
 create mode 100644 hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/DeleteContainerCommandWatcher.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/BigIntegerCodec.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/DeletedBlocksTransactionCodec.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/LongCodec.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/SCMMetadataStore.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/SCMMetadataStoreRDBImpl.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/X509CertificateCodec.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/package-info.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NonHealthyToHealthyNodeHandler.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMConfigurator.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
 delete mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMStorage.java
 create mode 100644 hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMStorageConfig.java
 create mode 100644 hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/CMakeLists.txt
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/README.md
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/libhdfs_read.c
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/libhdfs_write.c
 create mode 100755 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/test-libhdfs.sh
 delete mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_read.c
 delete mode 100644 hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_write.c
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.1.2.xml
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.0.xml
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BalancerProtocols.java
 delete mode 100755 hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockTokenWrappingQOP.java
 create mode 100644 hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.1.2.xml
 create mode 100644 hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_3.1.2.xml
 create mode 100644 hadoop-ozone/Jenkinsfile
 create mode 100644 hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneMultipartUploadPartListParts.java
 create mode 100644 hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntry.java
 delete mode 100644 hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
 create mode 100644 hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 create mode 100644 hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/OzoneKMSUtil.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/hdds/protocol/package-info.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneIllegalArgumentException.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
 rename hadoop-ozone/{ozone-manager => common}/src/main/java/org/apache/hadoop/ozone/om/exceptions/package-info.java (100%)
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/BucketEncryptionKeyInfo.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/EncryptionBucketInfo.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/KeyValueUtil.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadCompleteInfo.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPartInfo.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/S3SecretValue.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/WithMetadata.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerSecurityProtocol.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretKey.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretStore.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecurityException.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneTokenIdentifier.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/package-info.java
 create mode 100644 hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/package-info.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/exceptions/TestResultCodes.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmBucketInfo.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmKeyInfo.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/package-info.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java
 create mode 100644 hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
 create mode 100755 hadoop-ozone/dev-support/checks/acceptance.sh
 create mode 100755 hadoop-ozone/dev-support/checks/author.sh
 create mode 100755 hadoop-ozone/dev-support/checks/build.sh
 create mode 100755 hadoop-ozone/dev-support/checks/checkstyle.sh
 create mode 100755 hadoop-ozone/dev-support/checks/findbugs.sh
 create mode 100755 hadoop-ozone/dev-support/checks/isolation.sh
 create mode 100755 hadoop-ozone/dev-support/checks/rat.sh
 create mode 100755 hadoop-ozone/dev-support/checks/unit.sh
 create mode 100644 hadoop-ozone/dev-support/docker/Dockerfile
 delete mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_client_failure.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_datanode_isolation.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_flaky.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure_three_nodes_isolate.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_mixed_failure_two_nodes.py
 create mode 100644 hadoop-ozone/dist/src/main/blockade/test_blockade_scm_isolation.py
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-compose.yaml
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/.env
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/README.md
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/README.md
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/krb5.conf
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/launcher.sh
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/Dockerfile
 create mode 100755 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/build.sh
 create mode 100755 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/envtoconf.py
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/krb5.conf
 create mode 100755 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh
 create mode 100755 hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/transformation.py
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonetrace/docker-compose.yaml
 create mode 100644 hadoop-ozone/dist/src/main/compose/ozonetrace/docker-config
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/auditparser/auditparser.robot
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/s3/webui.robot
 create mode 100644 hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineUtils.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/CertificateClientTestImpl.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/package-info.java
 delete mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/TestOzoneRestClient.java
 delete mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rest/package-info.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestHybridPipelineOnDatanode.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientWithRatis.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestReadRetries.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestSecureOzoneRpcClient.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestDeleteContainerHandler.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/package-info.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerWithTLS.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 create mode 100644 hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/ca.crt
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/ca.key
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/client.crt
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/client.csr
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/client.key
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/client.pem
 create mode 100755 hadoop-ozone/integration-test/src/test/resources/ssl/generate.sh
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/server.crt
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/server.csr
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/server.key
 create mode 100644 hadoop-ozone/integration-test/src/test/resources/ssl/server.pem
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDbSnapshotServlet.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
 delete mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/RenameKeyHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/s3/GetS3SecretHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/s3/S3Commands.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/s3/package-info.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/CancelTokenHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/GetTokenHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/RenewTokenHandler.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/TokenCommands.java
 create mode 100644 hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/package-info.java
 create mode 100644 hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneManagerBlockToken.java
 create mode 100644 hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneTokenIdentifier.java
 create mode 100644 hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/package-info.java
 create mode 100644 hadoop-ozone/ozonefs-lib-legacy/pom.xml
 create mode 100644 hadoop-ozone/ozonefs-lib-legacy/src/main/resources/ozonefs.txt
 create mode 100644 hadoop-ozone/ozonefs-lib/pom.xml
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicKeyInfo.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/FilteredClassLoader.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterFactory.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterImpl.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSStorageStatistics.java
 create mode 100644 hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/Statistic.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/CompleteMultipartUploadRequest.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/CompleteMultipartUploadResponse.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ListPartsResponse.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/MultipartUploadInitiateResponse.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/ContinueToken.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/RangeHeaderParserUtil.java
 delete mode 100644 hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
 create mode 100644 hadoop-ozone/s3gateway/src/main/resources/webapps/static/index.html
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneOutputStreamStub.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestAbortMultipartUpload.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestInitiateMultipartUpload.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestListParts.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestMultipartUploadComplete.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPartUpload.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/util/TestContinueToken.java
 create mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/util/TestRangeHeaderParserUtil.java
 delete mode 100644 hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/util/TestS3utils.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/AuditParser.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/common/DatabaseHelper.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/common/ParserConsts.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/common/package-info.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/handler/LoadCommandHandler.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/handler/QueryCommandHandler.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/handler/TemplateCommandHandler.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/handler/package-info.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/model/AuditEntry.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/model/package-info.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/audit/parser/package-info.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/BlockIdDetails.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/ContainerMapper.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/fsck/package-info.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkBlockManager.java
 create mode 100644 hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOMKeyAllocation.java
 create mode 100644 hadoop-ozone/tools/src/main/resources/commands.properties
 create mode 100644 hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 create mode 100644 hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/package-info.java
 create mode 100644 hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/fsck/TestContainerMapper.java
 create mode 100644 hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/fsck/package-info.java
 create mode 100644 hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/scm/package-info.java
 create mode 100644 hadoop-ozone/tools/src/test/resources/commands.properties
 create mode 100644 hadoop-ozone/tools/src/test/resources/testaudit.log
 copy {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/README.md (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/pom.xml
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/base/ubuntu-16.04/Dockerfile.cpu.tf_1.8.0 (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/base/ubuntu-16.04/Dockerfile.gpu.tf_1.8.0 (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/build-all.sh (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/Dockerfile.cpu.tf_1.8.0 (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/Dockerfile.gpu.tf_1.8.0 (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/README.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/cifar10.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/cifar10_main.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/cifar10_model.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/cifar10_utils.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/generate_cifar10_tfrecords.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/with-cifar10-models/ubuntu-16.04/cifar10_estimator_tf_1.8.0/model_base.py (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/zeppelin-notebook-example/Dockerfile.gpu (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/zeppelin-notebook-example/run_container.sh (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/zeppelin-notebook-example/shiro.ini (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/docker/zeppelin-notebook-example/zeppelin-site.xml (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/AbstractCli.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/Cli.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/CliConstants.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/CliUtils.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/RunJobCli.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/ShowJobCli.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/BaseParameters.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/Localization.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/Quicklink.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/RunJobParameters.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/RunParameters.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/ShowJobParameters.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/package-info.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/ClientContext.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/Envs.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/common/api/JobComponentStatus.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/api/JobState.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/common/api/JobStatus.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/api/TaskType.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/conf/SubmarineConfiguration.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/conf/SubmarineLogs.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/exception/SubmarineException.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/exception/SubmarineRuntimeException.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/fs/DefaultRemoteDirectoryManager.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/common/fs/RemoteDirectoryManager.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/RuntimeFactory.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/FSBasedSubmarineStorageImpl.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/JobMonitor.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/JobSubmitter.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/StorageKeyConstants.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/common/SubmarineStorage.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/DeveloperGuide.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/Examples.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/HowToInstall.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/Index.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/InstallationGuide.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/InstallationGuideChineseVersion.md (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/site/markdown/QuickStart.md
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/RunningDistributedCifar10TFJobs.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/RunningZeppelinOnYARN.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/TestAndTroubleshooting.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/markdown/WriteDockerfile.md (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/resources/css/site.css (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/resources/images/job-logs-ui.png (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/resources/images/multiple-tensorboard-jobs.png (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/resources/images/submarine-installer.gif (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/resources/images/tensorboard-service.png (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/site/site.xml (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-core/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/TestRunJobCliParsing.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/TestShowJobCliParsing.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/java/org/apache/hadoop/yarn/submarine/common/MockClientContext.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/java/org/apache/hadoop/yarn/submarine/common/fs/MockRemoteDirectoryManager.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/java/org/apache/hadoop/yarn/submarine/runtimes/common/MemorySubmarineStorage.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/java/org/apache/hadoop/yarn/submarine/runtimes/common/TestFSBasedSubmarineStorage.java (100%)
 copy {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/resources/core-site.xml (100%)
 copy {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-core}/src/test/resources/hdfs-site.xml (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/README.md (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/pom.xml
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobMonitor.java
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceRuntimeFactory.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceUtils.java (100%)
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/builder/JobComponentStatusBuilder.java
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/builder/JobStatusBuilder.java
 create mode 100644 hadoop-submarine/hadoop-submarine-yarnservice-runtime/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/yarnservice/TestYarnServiceRunJobCli.java
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/yarnservice/YarnServiceCliTestUtils.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/test/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/TestTFConfigGenerator.java (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/test/resources/core-site.xml (100%)
 rename {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine => hadoop-submarine/hadoop-submarine-yarnservice-runtime}/src/test/resources/hdfs-site.xml (100%)
 create mode 100644 hadoop-submarine/pom.xml
 delete mode 100644 hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AssumedRoleCredentialProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractAWSCredentialProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AbstractSessionCredentialsProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/IAMInstanceCredentialsProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentialBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentialProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentials.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/NoAwsCredentialsException.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AWSPolicyProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDTService.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationTokenBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractS3ATokenIdentifier.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/DelegationConstants.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/DelegationTokenIOException.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecretOperations.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecrets.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/FullCredentialsTokenBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/FullCredentialsTokenIdentifier.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/RoleTokenBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/RoleTokenIdentifier.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADtFetcher.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/SessionTokenIdentifier.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/package-info.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/InternalSelectConstants.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/SelectBinding.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/SelectConstants.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/SelectInputStream.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/SelectTool.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/select/package-info.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher
 create mode 100644 hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
 create mode 100644 hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_token_architecture.md
 create mode 100644 hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md
 create mode 100644 hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md
 delete mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSECBlockOutputStream.java
 delete mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSUserDefinedKeyBlockOutputStream.java
 delete mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSES3BlockOutputStream.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/TestMarshalledCredentials.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/AbstractDelegationIT.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/CountInvocationsProvider.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/Csvout.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ILoadTestRoleCredentials.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ILoadTestSessionCredentials.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestDelegatedMRJob.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestRoleDelegationInFileystem.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestRoleDelegationTokens.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFileystem.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationTokens.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/MiniKerberizedHadoopCluster.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/TestS3ADelegationTokenSupport.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardToolTestHelper.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/NanoTimerStats.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/AbstractS3SelectTest.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/CsvFile.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3Select.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectCLI.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectLandsat.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/select/ITestS3SelectMRJob.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/mapreduce/MockJob.java
 create mode 100644 hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/IdentityTransformer.java
 create mode 100644 hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsIdentityTransformer.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Client_3.1.2.xml
 create mode 100644 hadoop-yarn-project/hadoop-yarn/dev-support/jdiff/Apache_Hadoop_YARN_Common_3.1.2.xml
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/CsiAdaptorPlugin.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesRequest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetLocalizationStatusesResponse.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/NodePublishVolumeRequest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/NodePublishVolumeResponse.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/NodeUnpublishVolumeRequest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/NodeUnpublishVolumeResponse.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationState.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalizationStatus.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/CsiConfigUtils.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/csi/package-info.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDSWithMultipleNodeManager.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShellWithNodeLabels.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/LocalizationStatus.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/pom.xml
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/CliUtils.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/RunJobParameters.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/common/api/JobComponentStatus.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/common/api/JobStatus.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/common/api/builder/JobComponentStatusBuilder.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/common/api/builder/JobStatusBuilder.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/RuntimeFactory.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobMonitor.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/QuickStart.md
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/TestRunJobCliParsing.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/yarnservice/TestYarnServiceRunJobCli.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetLocalizationStatusesRequestPBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetLocalizationStatusesResponsePBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/NodePublishVolumeRequestPBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/NodePublishVolumeResponsePBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/NodeUnpublishVolumeRequestPBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/NodeUnpublishVolumeResponsePBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalizationStatusPBImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/resources/resource-types/resource-types-6.xml
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/adaptor/CsiAdaptorFactory.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/adaptor/CsiAdaptorServices.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/adaptor/DefaultCsiAdaptorImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/translator/NodePublishVolumeRequestProtoTranslator.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/translator/NodeUnpublishVolumeRequestProtoTranslator.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/main/java/org/apache/hadoop/yarn/csi/utils/ConfigUtils.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/test/java/org/apache/hadoop/yarn/csi/adaptor/MockCsiAdaptor.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/test/java/org/apache/hadoop/yarn/csi/adaptor/TestNodePublishVolumeRequest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/test/java/org/apache/hadoop/yarn/csi/client/ICsiClientTest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/webapp/ContainerBlockTest.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/com/nvidia/NvidiaGPUPluginForRuntimeV2.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/com/nvidia/package-info.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DeviceResourceDockerRuntimePluginImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/ShellWrapper.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV2CommandPlugin.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/volume/csi/ContainerVolumePublisher.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/volume/csi/package-info.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/devices/devices-module.c
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/devices/devices-module.h
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/devices/test-devices-module.cc
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceSet.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/TestNvidiaDockerV2CommandPlugin.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/nvidia/com/TestNvidiaGpuPlugin.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourceOptionInfo.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/package-info.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSAllocateCustomResource.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/resource-types-test.xml
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineSchemaCreator.java
 delete mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/NoOpTimelineReaderImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/NoOpTimelineWriterImpl.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/SchemaCreator.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DummyTimelineSchemaCreator.java
 create mode 100644 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineSchemaCreator.java




[hadoop] 35/41: HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit bc8317f7dc84db57962d31ffc1b1bcb116e052a6
Author: Surendra Singh Lilhore <su...@apache.org>
AuthorDate: Tue Feb 5 10:03:04 2019 +0530

    HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar.
---
 .../apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java   | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index 2df883c..f0bf271 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -78,6 +78,7 @@ import org.apache.hadoop.hdfs.MiniDFSCluster.NameNodeInfo;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology.NNConf;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology.NSConf;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
@@ -87,6 +88,7 @@ import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterClient;
 import org.apache.hadoop.hdfs.server.namenode.FSImage;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.net.NetUtils;
@@ -489,6 +491,9 @@ public class MiniRouterDFSCluster {
             "0.0.0.0");
         conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY + "." + suffix,
             "127.0.0.1:" + context.httpsPort);
+        conf.set(
+            HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX + "." + ns,
+            ConfiguredFailoverProxyProvider.class.getName());
 
         // If the service port is enabled by default, we need to set them up
         boolean servicePortEnabled = false;

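For reference, a minimal sketch of the equivalent client-side setting, assuming a nameservice named "ns0" (the nameservice name is illustrative; the key prefix resolves to dfs.client.failover.proxy.provider). This mirrors what the test cluster now configures for each namespace:

    // Minimal sketch: register the failover proxy provider for the
    // nameservice "ns0" (the nameservice name is an assumption here).
    Configuration conf = new Configuration();
    conf.set(HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX + ".ns0",
        ConfiguredFailoverProxyProvider.class.getName());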



[hadoop] 10/41: HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 53b69da61041afb6807f1a79b2f9fa4bd6901c38
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Thu Nov 22 00:34:08 2018 +0530

    HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar.
---
 .../federation/router/RouterClientProtocol.java    |  24 ++--
 .../federation/router/RouterStoragePolicy.java     |  98 ++++++++++++++
 .../server/federation/MiniRouterDFSCluster.java    |  13 ++
 .../server/federation/router/TestRouterRpc.java    |  57 ++++++++
 .../TestRouterRpcStoragePolicySatisfier.java       | 149 +++++++++++++++++++++
 5 files changed, 325 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 6c44362..81717ca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -121,6 +121,8 @@ public class RouterClientProtocol implements ClientProtocol {
   private final String superGroup;
   /** Erasure coding calls. */
   private final ErasureCoding erasureCoding;
+  /** StoragePolicy calls. */
+  private final RouterStoragePolicy storagePolicy;
 
   RouterClientProtocol(Configuration conf, RouterRpcServer rpcServer) {
     this.rpcServer = rpcServer;
@@ -138,6 +140,7 @@ public class RouterClientProtocol implements ClientProtocol {
         DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY,
         DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
     this.erasureCoding = new ErasureCoding(rpcServer);
+    this.storagePolicy = new RouterStoragePolicy(rpcServer);
   }
 
   @Override
@@ -272,22 +275,12 @@ public class RouterClientProtocol implements ClientProtocol {
   @Override
   public void setStoragePolicy(String src, String policyName)
       throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
-
-    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
-    RemoteMethod method = new RemoteMethod("setStoragePolicy",
-        new Class<?>[] {String.class, String.class},
-        new RemoteParam(), policyName);
-    rpcClient.invokeSequential(locations, method, null, null);
+    storagePolicy.setStoragePolicy(src, policyName);
   }
 
   @Override
   public BlockStoragePolicy[] getStoragePolicies() throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.READ);
-
-    RemoteMethod method = new RemoteMethod("getStoragePolicies");
-    String ns = subclusterResolver.getDefaultNamespace();
-    return (BlockStoragePolicy[]) rpcClient.invokeSingle(ns, method);
+    return storagePolicy.getStoragePolicies();
   }
 
   @Override
@@ -1457,13 +1450,12 @@ public class RouterClientProtocol implements ClientProtocol {
 
   @Override
   public void unsetStoragePolicy(String src) throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
+    storagePolicy.unsetStoragePolicy(src);
   }
 
   @Override
   public BlockStoragePolicy getStoragePolicy(String path) throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
-    return null;
+    return storagePolicy.getStoragePolicy(path);
   }
 
   @Override
@@ -1551,7 +1543,7 @@ public class RouterClientProtocol implements ClientProtocol {
 
   @Override
   public void satisfyStoragePolicy(String path) throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
+    storagePolicy.satisfyStoragePolicy(path);
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
new file mode 100644
index 0000000..7145940
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Module that implements all the RPC calls in
+ * {@link org.apache.hadoop.hdfs.protocol.ClientProtocol} related to
+ * Storage Policy in the {@link RouterRpcServer}.
+ */
+public class RouterStoragePolicy {
+
+  /** RPC server to receive client calls. */
+  private final RouterRpcServer rpcServer;
+  /** RPC clients to connect to the Namenodes. */
+  private final RouterRpcClient rpcClient;
+  /** Interface to map global name space to HDFS subcluster name spaces. */
+  private final FileSubclusterResolver subclusterResolver;
+
+  public RouterStoragePolicy(RouterRpcServer server) {
+    this.rpcServer = server;
+    this.rpcClient = this.rpcServer.getRPCClient();
+    this.subclusterResolver = this.rpcServer.getSubclusterResolver();
+  }
+
+  public void setStoragePolicy(String src, String policyName)
+      throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
+
+    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
+    RemoteMethod method = new RemoteMethod("setStoragePolicy",
+        new Class<?>[] {String.class, String.class},
+        new RemoteParam(),
+        policyName);
+    rpcClient.invokeSequential(locations, method, null, null);
+  }
+
+  public BlockStoragePolicy[] getStoragePolicies() throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.READ);
+
+    RemoteMethod method = new RemoteMethod("getStoragePolicies");
+    String ns = subclusterResolver.getDefaultNamespace();
+    return (BlockStoragePolicy[]) rpcClient.invokeSingle(ns, method);
+  }
+
+  public void unsetStoragePolicy(String src) throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
+
+    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
+    RemoteMethod method = new RemoteMethod("unsetStoragePolicy",
+        new Class<?>[] {String.class},
+        new RemoteParam());
+    rpcClient.invokeSequential(locations, method);
+  }
+
+  public BlockStoragePolicy getStoragePolicy(String path)
+      throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
+
+    List<RemoteLocation> locations = rpcServer.getLocationsForPath(path, false);
+    RemoteMethod method = new RemoteMethod("getStoragePolicy",
+        new Class<?>[] {String.class},
+        new RemoteParam());
+    return (BlockStoragePolicy) rpcClient.invokeSequential(locations, method);
+  }
+
+  public void satisfyStoragePolicy(String path) throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
+
+    List<RemoteLocation> locations = rpcServer.getLocationsForPath(path, true);
+    RemoteMethod method = new RemoteMethod("satisfyStoragePolicy",
+        new Class<?>[] {String.class},
+        new RemoteParam());
+    rpcClient.invokeSequential(locations, method);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index a5693a6..2df883c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -67,6 +67,7 @@ import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
 import org.apache.hadoop.hdfs.DFSClient;
@@ -118,6 +119,8 @@ public class MiniRouterDFSCluster {
   private boolean highAvailability;
   /** Number of datanodes per nameservice. */
   private int numDatanodesPerNameservice = 2;
+  /** Custom storage type for each datanode. */
+  private StorageType[][] storageTypes = null;
 
   /** Mini cluster. */
   private MiniDFSCluster cluster;
@@ -615,6 +618,15 @@ public class MiniRouterDFSCluster {
   }
 
   /**
+   * Set custom storage type configuration for each datanode.
+   * If storageTypes is uninitialized or null, then
+   * StorageType.DEFAULT is used.
+   */
+  public void setStorageTypes(StorageType[][] storageTypes) {
+    this.storageTypes = storageTypes;
+  }
+
+  /**
    * Set the DNs to belong to only one subcluster.
    */
   public void setIndependentDNs() {
@@ -767,6 +779,7 @@ public class MiniRouterDFSCluster {
           .numDataNodes(numDNs)
           .nnTopology(topology)
           .dataNodeConfOverlays(dnConfs)
+          .storageTypes(storageTypes)
           .build();
       cluster.waitActive();
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index 204366e..8632203 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -72,6 +72,7 @@ import org.apache.hadoop.hdfs.protocol.ECBlockGroupStats;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyState;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -770,6 +771,62 @@ public class TestRouterRpc {
   }
 
   @Test
+  public void testProxyGetAndUnsetStoragePolicy() throws Exception {
+    String file = "/testGetStoragePolicy";
+    String nnFilePath = cluster.getNamenodeTestDirectoryForNS(ns) + file;
+    String routerFilePath = cluster.getFederatedTestDirectoryForNS(ns) + file;
+
+    createFile(routerFS, routerFilePath, 32);
+
+    // Get storage policy via router
+    BlockStoragePolicy policy = routerProtocol.getStoragePolicy(routerFilePath);
+    // Verify default policy is HOT
+    assertEquals(HdfsConstants.HOT_STORAGE_POLICY_NAME, policy.getName());
+    assertEquals(HdfsConstants.HOT_STORAGE_POLICY_ID, policy.getId());
+
+    // Get storage policies via router
+    BlockStoragePolicy[] policies = routerProtocol.getStoragePolicies();
+    BlockStoragePolicy[] nnPolicies = namenode.getClient().getStoragePolicies();
+    // Verify policies returned by the router match those returned by the NN
+    assertArrayEquals(nnPolicies, policies);
+
+    BlockStoragePolicy newPolicy = policies[0];
+    while (newPolicy.isCopyOnCreateFile()) {
+      // Pick a non-copy-on-create policy, because if copyOnCreateFile is set
+      // the policy cannot be changed after file creation.
+      Random rand = new Random();
+      int randIndex = rand.nextInt(policies.length);
+      newPolicy = policies[randIndex];
+    }
+    routerProtocol.setStoragePolicy(routerFilePath, newPolicy.getName());
+
+    // Get storage policy via router
+    policy = routerProtocol.getStoragePolicy(routerFilePath);
+    // Verify the new policy is in effect
+    assertEquals(newPolicy.getName(), policy.getName());
+    assertEquals(newPolicy.getId(), policy.getId());
+
+    // Verify policy via NN
+    BlockStoragePolicy nnPolicy =
+        namenode.getClient().getStoragePolicy(nnFilePath);
+    assertEquals(nnPolicy.getName(), policy.getName());
+    assertEquals(nnPolicy.getId(), policy.getId());
+
+    // Unset storage policy via router
+    routerProtocol.unsetStoragePolicy(routerFilePath);
+
+    // Get storage policy
+    policy = routerProtocol.getStoragePolicy(routerFilePath);
+    assertEquals(HdfsConstants.HOT_STORAGE_POLICY_NAME, policy.getName());
+    assertEquals(HdfsConstants.HOT_STORAGE_POLICY_ID, policy.getId());
+
+    // Verify policy via NN
+    nnPolicy = namenode.getClient().getStoragePolicy(nnFilePath);
+    assertEquals(nnPolicy.getName(), policy.getName());
+    assertEquals(nnPolicy.getId(), policy.getId());
+  }
+
+  @Test
   public void testProxyGetPreferedBlockSize() throws Exception {
 
     // Query via NN and Router and verify
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcStoragePolicySatisfier.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcStoragePolicySatisfier.java
new file mode 100644
index 0000000..fa1079a
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcStoragePolicySatisfier.java
@@ -0,0 +1,149 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.balancer.NameNodeConnector;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics;
+import org.apache.hadoop.hdfs.server.namenode.sps.Context;
+import org.apache.hadoop.hdfs.server.namenode.sps.StoragePolicySatisfier;
+import org.apache.hadoop.hdfs.server.sps.ExternalSPSContext;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test StoragePolicySatisfy through router rpc calls.
+ */
+public class TestRouterRpcStoragePolicySatisfier {
+
+  /** Federated HDFS cluster. */
+  private static MiniRouterDFSCluster cluster;
+
+  /** Client interface to the Router. */
+  private static ClientProtocol routerProtocol;
+
+  /** Filesystem interface to the Router. */
+  private static FileSystem routerFS;
+  /** Filesystem interface to the Namenode. */
+  private static FileSystem nnFS;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    cluster = new MiniRouterDFSCluster(false, 1);
+    // Set storage types for the cluster
+    StorageType[][] newtypes = new StorageType[][] {
+        {StorageType.ARCHIVE, StorageType.DISK}};
+    cluster.setStorageTypes(newtypes);
+
+    Configuration conf = cluster.getNamenodes().get(0).getConf();
+    conf.set(DFSConfigKeys.DFS_STORAGE_POLICY_SATISFIER_MODE_KEY,
+        HdfsConstants.StoragePolicySatisfierMode.EXTERNAL.toString());
+    // Reduce the refresh cycle to pick up the latest datanodes.
+    conf.setLong(DFSConfigKeys.DFS_SPS_DATANODE_CACHE_REFRESH_INTERVAL_MS,
+        1000);
+    cluster.addNamenodeOverrides(conf);
+
+    cluster.setNumDatanodesPerNameservice(1);
+
+    // Start NNs and DNs and wait until ready
+    cluster.startCluster();
+
+    // Start routers with only an RPC service
+    Configuration routerConf = new RouterConfigBuilder()
+        .metrics()
+        .rpc()
+        .build();
+    // We decrease the DN cache times to make the test faster
+    routerConf.setTimeDuration(
+        RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
+    cluster.addRouterOverrides(routerConf);
+    cluster.startRouters();
+
+    // Register and verify all NNs with all routers
+    cluster.registerNamenodes();
+    cluster.waitNamenodeRegistration();
+
+    // Create mock locations
+    cluster.installMockLocations();
+
+    // Random router for this test
+    MiniRouterDFSCluster.RouterContext rndRouter = cluster.getRandomRouter();
+
+    routerProtocol = rndRouter.getClient().getNamenode();
+    routerFS = rndRouter.getFileSystem();
+    nnFS = cluster.getNamenodes().get(0).getFileSystem();
+
+    NameNodeConnector nnc = DFSTestUtil.getNameNodeConnector(conf,
+        HdfsServerConstants.MOVER_ID_PATH, 1, false);
+
+    StoragePolicySatisfier externalSps = new StoragePolicySatisfier(conf);
+    Context externalCtxt = new ExternalSPSContext(externalSps, nnc);
+
+    externalSps.init(externalCtxt);
+    externalSps.start(HdfsConstants.StoragePolicySatisfierMode.EXTERNAL);
+  }
+
+  @AfterClass
+  public static void tearDown() {
+    cluster.shutdown();
+  }
+
+  @Test
+  public void testStoragePolicySatisfier() throws Exception {
+    final String file = "/testStoragePolicySatisfierCommand";
+    short repl = 1;
+    int size = 32;
+    DFSTestUtil.createFile(routerFS, new Path(file), size, repl, 0);
+    // Verify storage type is DISK
+    DFSTestUtil.waitExpectedStorageType(file, StorageType.DISK, 1, 20000,
+        (DistributedFileSystem) routerFS);
+    // Set storage policy as COLD
+    routerProtocol
+        .setStoragePolicy(file, HdfsConstants.COLD_STORAGE_POLICY_NAME);
+    // Verify storage policy is set properly
+    BlockStoragePolicy storagePolicy = routerProtocol.getStoragePolicy(file);
+    assertEquals(HdfsConstants.COLD_STORAGE_POLICY_NAME,
+        storagePolicy.getName());
+    // Invoke satisfy storage policy
+    routerProtocol.satisfyStoragePolicy(file);
+    // Verify storage type is ARCHIVE
+    DFSTestUtil.waitExpectedStorageType(file, StorageType.ARCHIVE, 1, 20000,
+        (DistributedFileSystem) routerFS);
+
+    // Verify storage type via NN
+    DFSTestUtil.waitExpectedStorageType(file, StorageType.ARCHIVE, 1, 20000,
+        (DistributedFileSystem) nnFS);
+  }
+}

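For reference, a minimal sketch of the storage-policy calls this change routes through the Router, using the standard DistributedFileSystem API (routerFs and the path are illustrative assumptions, not part of the commit):

    // Minimal sketch, assuming routerFs is a DistributedFileSystem bound to
    // the Router and that /data/file already exists (both assumptions).
    Path file = new Path("/data/file");
    routerFs.setStoragePolicy(file, HdfsConstants.COLD_STORAGE_POLICY_NAME);
    BlockStoragePolicy policy = routerFs.getStoragePolicy(file);
    routerFs.satisfyStoragePolicy(file);  // schedule block moves to match the policy
    routerFs.unsetStoragePolicy(file);    // revert to the inherited/default policy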



[hadoop] 30/41: HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 30a5fba34cd621dff0f6ec65f2263b6209dafec3
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Wed Jan 23 22:59:43 2019 +0800

    HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.
---
 .../hdfs/server/federation/router/Quota.java       |  7 ++++-
 .../server/federation/router/TestRouterQuota.java  | 32 +++++++++++++++++++++-
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index cfb538f..a6f5bab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -216,6 +216,11 @@ public class Quota {
         locations.addAll(rpcServer.getLocationsForPath(childPath, true, false));
       }
     }
-    return locations;
+    if (locations.size() >= 1) {
+      return locations;
+    } else {
+      locations.addAll(rpcServer.getLocationsForPath(path, true, false));
+      return locations;
+    }
   }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
index 656b401..034023c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
@@ -755,4 +755,34 @@ public class TestRouterQuota {
     assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getQuota());
     assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getSpaceQuota());
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testSetQuotaNotMountTable() throws Exception {
+    long nsQuota = 5;
+    long ssQuota = 100;
+    final FileSystem nnFs1 = nnContext1.getFileSystem();
+
+    // setQuota should run for any directory
+    MountTable mountTable1 = MountTable.newInstance("/setquotanmt",
+        Collections.singletonMap("ns0", "/testdir16"));
+
+    addMountTable(mountTable1);
+
+    // Add a directory not present in mount table.
+    nnFs1.mkdirs(new Path("/testdir16/testdir17"));
+
+    routerContext.getRouter().getRpcServer().setQuota("/setquotanmt/testdir17",
+        nsQuota, ssQuota, null);
+
+    RouterQuotaUpdateService updateService = routerContext.getRouter()
+        .getQuotaCacheUpdateService();
+    // ensure setQuota RPC call was invoked
+    updateService.periodicInvoke();
+
+    ClientProtocol client1 = nnContext1.getClient().getNamenode();
+    final QuotaUsage quota1 = client1.getQuotaUsage("/testdir16/testdir17");
+
+    assertEquals(nsQuota, quota1.getQuota());
+    assertEquals(ssQuota, quota1.getSpaceQuota());
+  }
+}

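For reference, a minimal sketch of what the fix enables through the standard client API (the paths mirror the test above and are illustrative):

    // Minimal sketch, assuming routerFs is a DistributedFileSystem bound to
    // the Router and /setquotanmt is a mount entry. With this fix, the quota
    // call also resolves directories below the mount point itself.
    routerFs.setQuota(new Path("/setquotanmt/testdir17"),
        5 /* namespace quota */, 100 /* storage-space quota */);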



[hadoop] 09/41: HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0b67a7ddc84d5757c20046dd4431446a7b671a40
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Wed Nov 21 10:40:26 2018 +0800

    HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.
---
 .../server/federation/router/RBFConfigKeys.java    |  4 ++
 .../federation/router/RouterClientProtocol.java    | 15 ++++--
 .../server/federation/router/RouterRpcServer.java  |  9 ++++
 .../src/main/resources/hdfs-rbf-default.xml        | 10 ++++
 .../router/TestRouterRpcMultiDestination.java      | 59 ++++++++++++++++++++++
 5 files changed, 93 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index dd72e36..10018fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -125,6 +125,10 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
   public static final String DFS_ROUTER_CLIENT_REJECT_OVERLOAD =
       FEDERATION_ROUTER_PREFIX + "client.reject.overload";
   public static final boolean DFS_ROUTER_CLIENT_REJECT_OVERLOAD_DEFAULT = false;
+  public static final String DFS_ROUTER_ALLOW_PARTIAL_LIST =
+      FEDERATION_ROUTER_PREFIX + "client.allow-partial-listing";
+  public static final boolean DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT = true;
+
 
   // HDFS Router State Store connection
   public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 9e2979b..6c44362 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -112,6 +112,9 @@ public class RouterClientProtocol implements ClientProtocol {
   private final FileSubclusterResolver subclusterResolver;
   private final ActiveNamenodeResolver namenodeResolver;
 
+  /** If it requires response from all subclusters. */
+  private final boolean allowPartialList;
+
   /** Identifier for the super user. */
   private final String superUser;
   /** Identifier for the super group. */
@@ -125,6 +128,10 @@ public class RouterClientProtocol implements ClientProtocol {
     this.subclusterResolver = rpcServer.getSubclusterResolver();
     this.namenodeResolver = rpcServer.getNamenodeResolver();
 
+    this.allowPartialList = conf.getBoolean(
+        RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST,
+        RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT);
+
     // User and group for reporting
     this.superUser = System.getProperty("user.name");
     this.superGroup = conf.get(
@@ -608,8 +615,8 @@ public class RouterClientProtocol implements ClientProtocol {
         new Class<?>[] {String.class, startAfter.getClass(), boolean.class},
         new RemoteParam(), startAfter, needLocation);
     Map<RemoteLocation, DirectoryListing> listings =
-        rpcClient.invokeConcurrent(
-            locations, method, false, false, DirectoryListing.class);
+        rpcClient.invokeConcurrent(locations, method,
+            !this.allowPartialList, false, DirectoryListing.class);
 
     Map<String, HdfsFileStatus> nnListing = new TreeMap<>();
     int totalRemainingEntries = 0;
@@ -998,8 +1005,8 @@ public class RouterClientProtocol implements ClientProtocol {
       RemoteMethod method = new RemoteMethod("getContentSummary",
           new Class<?>[] {String.class}, new RemoteParam());
       Map<RemoteLocation, ContentSummary> results =
-          rpcClient.invokeConcurrent(
-              locations, method, false, false, ContentSummary.class);
+          rpcClient.invokeConcurrent(locations, method,
+              !this.allowPartialList, false, ContentSummary.class);
       summaries.addAll(results.values());
     } catch (FileNotFoundException e) {
       notFoundException = e;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index fcb35f4..ad5980b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -1484,6 +1484,15 @@ public class RouterRpcServer extends AbstractService
   }
 
   /**
+   * Get ClientProtocol module implementation.
+   * @return ClientProtocol implementation
+   */
+  @VisibleForTesting
+  public RouterClientProtocol getClientProtocolModule() {
+    return this.clientProto;
+  }
+
+  /**
    * Get RPC metrics info.
    * @return The instance of FederationRPCMetrics.
    */
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 53bf53a..09050bb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -483,6 +483,16 @@
   </property>
 
   <property>
+    <name>dfs.federation.router.client.allow-partial-listing</name>
+    <value>true</value>
+    <description>
+      Whether the Router may return a partial list of files for a multi-destination mount point when one of the subclusters is unavailable.
+      If true, a partial list may be returned when a subcluster is down.
+      If false, the request fails when any subcluster is unavailable.
+    </description>
+  </property>
+
+  <property>
     <name>dfs.federation.router.keytab.file</name>
     <value></value>
     <description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
index 94b712f..3101748 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
@@ -20,6 +20,13 @@ package org.apache.hadoop.hdfs.server.federation.router;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createFile;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.verifyFileExists;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.internal.util.reflection.Whitebox.getInternalState;
+import static org.mockito.internal.util.reflection.Whitebox.setInternalState;
 
 import java.io.IOException;
 import java.lang.reflect.Method;
@@ -44,6 +51,13 @@ import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterConte
 import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.namenode.ha.HAContext;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.StandbyException;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Test;
 
 /**
 * The RPC interface of the {@link getRouter()} implemented by
@@ -214,4 +228,49 @@ public class TestRouterRpcMultiDestination extends TestRouterRpc {
     testRename(getRouterContext(), filename1, renamedFile, false);
     testRename2(getRouterContext(), filename1, renamedFile, false);
   }
+
+  @Test
+  public void testSubclusterDown() throws Exception {
+    final int totalFiles = 6;
+
+    List<RouterContext> routers = getCluster().getRouters();
+
+    // Test the behavior when everything is fine
+    FileSystem fs = getRouterFileSystem();
+    FileStatus[] files = fs.listStatus(new Path("/"));
+    assertEquals(totalFiles, files.length);
+
+    // Simulate one of the subclusters is in standby
+    NameNode nn0 = getCluster().getNamenode("ns0", null).getNamenode();
+    FSNamesystem ns0 = nn0.getNamesystem();
+    HAContext nn0haCtx = (HAContext)getInternalState(ns0, "haContext");
+    HAContext mockCtx = mock(HAContext.class);
+    doThrow(new StandbyException("Mock")).when(mockCtx).checkOperation(any());
+    setInternalState(ns0, "haContext", mockCtx);
+
+    // router0 should throw an exception
+    RouterContext router0 = routers.get(0);
+    RouterRpcServer router0RPCServer = router0.getRouter().getRpcServer();
+    RouterClientProtocol router0ClientProtocol =
+        router0RPCServer.getClientProtocolModule();
+    setInternalState(router0ClientProtocol, "allowPartialList", false);
+    try {
+      router0.getFileSystem().listStatus(new Path("/"));
+      fail("listStatus() should throw an exception");
+    } catch (RemoteException re) {
+      GenericTestUtils.assertExceptionContains(
+          "No namenode available to invoke getListing", re);
+    }
+
+    // router1 should report partial results
+    RouterContext router1 = routers.get(1);
+    files = router1.getFileSystem().listStatus(new Path("/"));
+    assertTrue("Found " + files.length + " items, expected fewer",
+        files.length < totalFiles);
+
+
+    // Restore the HA context and the Router
+    setInternalState(ns0, "haContext", nn0haCtx);
+    setInternalState(router0ClientProtocol, "allowPartialList", true);
+  }
 }
\ No newline at end of file

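For reference, a minimal sketch of opting into the strict behavior that the new key controls, grounded directly in the RBFConfigKeys constant added above:

    // Minimal sketch: fail getListing()/getContentSummary() when any
    // subcluster is unavailable instead of returning partial results.
    Configuration routerConf = new Configuration();
    routerConf.setBoolean(RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST, false);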



[hadoop] 07/41: HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 9f362fa9b52ca3f64e0ce96c97ab9d947df43793
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Tue Nov 13 10:14:35 2018 -0800

    HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng.
---
 .../federation/metrics/FederationMetrics.java      | 12 ++++++++++--
 .../federation/metrics/NamenodeBeanMetrics.java    | 22 ++++------------------
 .../server/federation/router/RBFConfigKeys.java    |  7 +++++++
 .../src/main/resources/hdfs-rbf-default.xml        | 17 +++++++++++++++++
 .../router/TestRouterRPCClientRetries.java         |  2 +-
 .../server/federation/router/TestRouterRpc.java    |  2 +-
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index 23f62b6..6a0a46e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -47,12 +47,14 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
 import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
@@ -95,7 +97,7 @@ public class FederationMetrics implements FederationMBean {
   private static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss";
 
  /** Prevent holding the page from loading too long. */
-  private static final long TIME_OUT = TimeUnit.SECONDS.toMillis(1);
+  private final long timeOut;
 
 
   /** Router interface. */
@@ -143,6 +145,12 @@ public class FederationMetrics implements FederationMBean {
       this.routerStore = stateStore.getRegisteredRecordStore(
           RouterStore.class);
     }
+
+    // Initialize the cache for the DN reports
+    Configuration conf = router.getConfig();
+    this.timeOut = conf.getTimeDuration(RBFConfigKeys.DN_REPORT_TIME_OUT,
+        RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
+
   }
 
   /**
@@ -434,7 +442,7 @@ public class FederationMetrics implements FederationMBean {
     try {
       RouterRpcServer rpcServer = this.router.getRpcServer();
       DatanodeInfo[] live = rpcServer.getDatanodeReport(
-          DatanodeReportType.LIVE, false, TIME_OUT);
+          DatanodeReportType.LIVE, false, timeOut);
 
       if (live.length > 0) {
         float totalDfsUsed = 0;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 0ca5f73..64df10c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -74,21 +74,6 @@ public class NamenodeBeanMetrics
   private static final Logger LOG =
       LoggerFactory.getLogger(NamenodeBeanMetrics.class);
 
-  /** Prevent holding the page from loading too long. */
-  private static final String DN_REPORT_TIME_OUT =
-      RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
-  /** We only wait for 1 second. */
-  private static final long DN_REPORT_TIME_OUT_DEFAULT =
-      TimeUnit.SECONDS.toMillis(1);
-
-  /** Time to cache the DN information. */
-  public static final String DN_REPORT_CACHE_EXPIRE =
-      RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
-  /** We cache the DN information for 10 seconds by default. */
-  public static final long DN_REPORT_CACHE_EXPIRE_DEFAULT =
-      TimeUnit.SECONDS.toMillis(10);
-
-
   /** Instance of the Router being monitored. */
   private final Router router;
 
@@ -148,10 +133,11 @@ public class NamenodeBeanMetrics
     // Initialize the cache for the DN reports
     Configuration conf = router.getConfig();
     this.dnReportTimeOut = conf.getTimeDuration(
-        DN_REPORT_TIME_OUT, DN_REPORT_TIME_OUT_DEFAULT, TimeUnit.MILLISECONDS);
+        RBFConfigKeys.DN_REPORT_TIME_OUT,
+        RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
     long dnCacheExpire = conf.getTimeDuration(
-        DN_REPORT_CACHE_EXPIRE,
-        DN_REPORT_CACHE_EXPIRE_DEFAULT, TimeUnit.MILLISECONDS);
+        RBFConfigKeys.DN_REPORT_CACHE_EXPIRE,
+        RBFConfigKeys.DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS);
     this.dnCache = CacheBuilder.newBuilder()
         .expireAfterWrite(dnCacheExpire, TimeUnit.MILLISECONDS)
         .build(
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index fa474f4..dd72e36 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -233,6 +233,13 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "https-bind-host";
   public static final String DFS_ROUTER_HTTPS_ADDRESS_DEFAULT =
       "0.0.0.0:" + DFS_ROUTER_HTTPS_PORT_DEFAULT;
+  public static final String DN_REPORT_TIME_OUT =
+      FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
+  public static final long  DN_REPORT_TIME_OUT_MS_DEFAULT = 1000;
+  public static final String DN_REPORT_CACHE_EXPIRE =
+      FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
+  public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT =
+      TimeUnit.SECONDS.toMillis(10);
 
   // HDFS Router-based federation quota
   public static final String DFS_ROUTER_QUOTA_ENABLE =
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 29c3093..53bf53a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -143,6 +143,23 @@
     </description>
   </property>
 
+
+  <property>
+    <name>dfs.federation.router.dn-report.time-out</name>
+    <value>1000</value>
+    <description>
+      Timeout in milliseconds for getDatanodeReport.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.dn-report.cache-expire</name>
+    <value>10s</value>
+    <description>
+      Expiration time for the cached datanode report.
+    </description>
+  </property>
+
   <property>
     <name>dfs.federation.router.metrics.class</name>
     <value>org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor</value>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
index e5ab3ab..f84e9a0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
@@ -81,7 +81,7 @@ public class TestRouterRPCClientRetries {
         .rpc()
         .build();
     routerConf.setTimeDuration(
-        NamenodeBeanMetrics.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
+        RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
 
     // reduce IPC client connection retry times and interval time
     Configuration clientConf = new Configuration(false);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index a32cba1..204366e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -175,7 +175,7 @@ public class TestRouterRpc {
         .build();
     // We decrease the DN cache times to make the test faster
     routerConf.setTimeDuration(
-        NamenodeBeanMetrics.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
+        RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
     cluster.addRouterOverrides(routerConf);
     cluster.startRouters();
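
As background for the patch above: the Router keeps datanode reports in a Guava LoadingCache whose entries expire after the configured interval. Below is a minimal, self-contained sketch of that caching pattern; the class and key names are illustrative and not taken from the patch.

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class DnReportCacheSketch {
  public static void main(String[] args) throws Exception {
    // Entries expire 10 seconds after being written, mirroring the default
    // of dfs.federation.router.dn-report.cache-expire.
    LoadingCache<String, String> dnCache = CacheBuilder.newBuilder()
        .expireAfterWrite(TimeUnit.SECONDS.toMillis(10), TimeUnit.MILLISECONDS)
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String dnType) {
            // The Router would issue getDatanodeReport() here; we fake it.
            return "report-for-" + dnType;
          }
        });
    System.out.println(dnCache.get("LIVE")); // triggers load()
    System.out.println(dnCache.get("LIVE")); // served from the cache
  }
}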
 


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 04/41: HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dde38f7d93badf9fceeba3f07b4a792c07a6ca52
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Thu Nov 1 11:49:33 2018 -0700

    HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota.
---
 .../hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index a1adf77..1349aa3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -351,7 +351,7 @@ public class NamenodeHeartbeatService extends PeriodicService {
                 jsonObject.getLong("PendingReplicationBlocks"),
                 jsonObject.getLong("UnderReplicatedBlocks"),
                 jsonObject.getLong("PendingDeletionBlocks"),
-                jsonObject.getLong("ProvidedCapacityTotal"));
+                jsonObject.optLong("ProvidedCapacityTotal"));
           }
         }
       }
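
The one-line change is load-bearing: getLong() throws when the key is absent, which happens for example when the monitored NameNode predates the ProvidedCapacityTotal bean, while optLong() falls back to 0. A hedged illustration with org.json follows; the heartbeat service may parse JMX with a different JSON library, but getLong/optLong follow the same contract there.

import org.json.JSONException;
import org.json.JSONObject;

public class OptLongSketch {
  public static void main(String[] args) {
    // JMX output from a NameNode that does not report ProvidedCapacityTotal.
    JSONObject jsonObject = new JSONObject("{\"PendingDeletionBlocks\": 0}");

    // optLong() returns 0 for a missing key instead of throwing.
    System.out.println(jsonObject.optLong("ProvidedCapacityTotal")); // 0

    try {
      jsonObject.getLong("ProvidedCapacityTotal");
    } catch (JSONException e) {
      // This is the failure the patch avoids.
      System.out.println("getLong failed: " + e.getMessage());
    }
  }
}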




[hadoop] 23/41: HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c30d4d9f915b6aebf609bf42121586e73f34c4e5
Author: Surendra Singh Lilhore <su...@apache.org>
AuthorDate: Thu Jan 10 16:18:23 2019 +0530

    HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar.
---
 .../federation/metrics/FederationMetrics.java      |  2 +-
 .../federation/metrics/NamenodeBeanMetrics.java    | 25 +++++++++++++++-
 .../hdfs/server/federation/router/Router.java      |  7 +++++
 .../src/main/webapps/router/federationhealth.js    |  2 +-
 .../federation/router/TestRouterAdminCLI.java      | 33 +++++++++++++++++++++-
 5 files changed, 65 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index b3fe6cc..c66910c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -582,7 +582,7 @@ public class FederationMetrics implements FederationMBean {
 
   @Override
   public String getRouterStatus() {
-    return "RUNNING";
+    return this.router.getRouterState().toString();
   }
 
   /**
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 5e95606..963c6c2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -45,6 +45,7 @@ import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
+import org.apache.hadoop.hdfs.server.federation.router.RouterServiceState;
 import org.apache.hadoop.hdfs.server.federation.router.SubClusterTimeoutException;
 import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
@@ -232,7 +233,29 @@ public class NamenodeBeanMetrics
 
   @Override
   public String getSafemode() {
-    // We assume that the global federated view is never in safe mode
+    try {
+      if (getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
+        return "Safe mode is ON. " + this.getSafeModeTip();
+      }
+    } catch (IOException e) {
+      return "Failed to get safemode status. Please check router"
+          + "log for more detail.";
+    }
+    return "";
+  }
+
+  private String getSafeModeTip() throws IOException {
+    Router rt = getRouter();
+    String cmd = "Use \"hdfs dfsrouteradmin -safemode leave\" "
+        + "to turn safe mode off.";
+    if (rt.isRouterState(RouterServiceState.INITIALIZING)
+        || rt.isRouterState(RouterServiceState.UNINITIALIZED)) {
+      return "Router is in" + rt.getRouterState()
+          + "mode, the router will immediately return to "
+          + "normal mode after some time. " + cmd;
+    } else if (rt.isRouterState(RouterServiceState.SAFEMODE)) {
+      return "It was turned on manually. " + cmd;
+    }
     return "";
   }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 6a7437f..0257162 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -585,6 +585,13 @@ public class Router extends CompositeService {
     return this.state;
   }
 
+  /**
+   * Check whether the Router is in the given state.
+   */
+  public boolean isRouterState(RouterServiceState routerState) {
+    return routerState.equals(this.state);
+  }
+
   /////////////////////////////////////////////////////////
   // Submodule getters
   /////////////////////////////////////////////////////////
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
index bb8e057..5da7b07 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
@@ -35,7 +35,7 @@
     var BEANS = [
       {"name": "federation",  "url": "/jmx?qry=Hadoop:service=Router,name=FederationState"},
       {"name": "routerstat",  "url": "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},
-      {"name": "router",      "url": "/jmx?qrt=Hadoop:service=NameNode,name=NameNodeInfo"},
+      {"name": "router",      "url": "/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"},
       {"name": "mem",         "url": "/jmx?qry=java.lang:type=Memory"}
     ];
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 445022b..ab733dd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext;
 import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
 import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
@@ -66,6 +67,7 @@ public class TestRouterAdminCLI {
 
   private static RouterAdmin admin;
   private static RouterClient client;
+  private static Router router;
 
   private static final String TEST_USER = "test-user";
 
@@ -80,6 +82,7 @@ public class TestRouterAdminCLI {
     // Build and start a router with State Store + admin + RPC
     Configuration conf = new RouterConfigBuilder()
         .stateStore()
+        .metrics()
         .admin()
         .rpc()
         .safemode()
@@ -90,7 +93,7 @@ public class TestRouterAdminCLI {
     cluster.startRouters();
 
     routerContext = cluster.getRandomRouter();
-    Router router = routerContext.getRouter();
+    router = routerContext.getRouter();
     stateStore = router.getStateStore();
 
     Configuration routerConf = new Configuration();
@@ -721,6 +724,34 @@ public class TestRouterAdminCLI {
   }
 
   @Test
+  public void testSafeModeStatus() throws Exception {
+    // Ensure the Router has reached the RUNNING state.
+    waitState(RouterServiceState.RUNNING);
+    assertFalse(routerContext.getRouter().getSafemodeService().isInSafeMode());
+    assertEquals(0,
+        ToolRunner.run(admin, new String[] {"-safemode", "enter" }));
+
+    FederationMetrics metrics = router.getMetrics();
+    String routerStatus = metrics.getRouterStatus();
+
+    // verify the state reported through FederationMetrics
+    assertEquals(RouterServiceState.SAFEMODE.toString(), routerStatus);
+    assertTrue(routerContext.getRouter().getSafemodeService().isInSafeMode());
+
+    System.setOut(new PrintStream(out));
+    assertEquals(0,
+        ToolRunner.run(admin, new String[] {"-safemode", "leave" }));
+    routerStatus = metrics.getRouterStatus();
+    // verify the state again after leaving safe mode
+    assertEquals(RouterServiceState.RUNNING.toString(), routerStatus);
+    assertFalse(routerContext.getRouter().getSafemodeService().isInSafeMode());
+
+    out.reset();
+    assertEquals(0, ToolRunner.run(admin, new String[] {"-safemode", "get" }));
+    assertTrue(out.toString().contains("false"));
+  }
+
+  @Test
   public void testCreateInvalidEntry() throws Exception {
     String[] argv = new String[] {
         "-add", "test-createInvalidEntry", "ns0", "/createInvalidEntry"};


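Taken together, the getSafemode()/getSafeModeTip() changes above map router states to user-facing status messages. Here is a self-contained sketch of that mapping; the enum is an illustrative stand-in for RouterServiceState and the class name is invented, while the message strings follow the patch.

public class SafemodeStatusSketch {
  // Illustrative stand-in for RouterServiceState.
  enum State { UNINITIALIZED, INITIALIZING, RUNNING, SAFEMODE }

  static String safemodeMessage(State state) {
    String cmd = "Use \"hdfs dfsrouteradmin -safemode leave\" "
        + "to turn safe mode off.";
    if (state == State.SAFEMODE) {
      return "Safe mode is ON. It was turned on manually. " + cmd;
    } else if (state == State.INITIALIZING || state == State.UNINITIALIZED) {
      // getSafeModeTip() also covers transitional states.
      return "Safe mode is ON. Router is in " + state + " mode, the router "
          + "will immediately return to normal mode after some time. " + cmd;
    }
    return ""; // Normal operation reports an empty safemode string.
  }

  public static void main(String[] args) {
    System.out.println(safemodeMessage(State.SAFEMODE));
    System.out.println(safemodeMessage(State.RUNNING).isEmpty()); // true
  }
}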


[hadoop] 18/41: HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 640fe0789f08afb6030c4c8940ae01a2599f22f3
Author: Takanobu Asanuma <ta...@apache.org>
AuthorDate: Tue Dec 18 19:47:36 2018 +0900

    HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable.
---
 .../hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html | 2 +-
 .../hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js   | 1 +
 .../hadoop-hdfs-rbf/src/main/webapps/static/rbf.css               | 8 +-------
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
index 068988c..0f089fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
@@ -408,7 +408,7 @@
       <td>{nameserviceId}</td>
       <td>{path}</td>
       <td>{order}</td>
-      <td class="mount-table-icon mount-table-read-only-{readonly}"/>
+      <td align="center" class="mount-table-icon mount-table-read-only-{readonly}" title="{status}"/>
       <td>{ownerName}</td>
       <td>{groupName}</td>
       <td>{mode}</td>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
index 6311a80..bb8e057 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
@@ -317,6 +317,7 @@
         for (var i = 0, e = mountTable.length; i < e; ++i) {
           if (mountTable[i].readonly == true) {
             mountTable[i].readonly = "true"
+            mountTable[i].status = "Read only"
           } else {
             mountTable[i].readonly = "false"
           }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
index 43112af..5cdd826 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
@@ -132,12 +132,6 @@
 }
 
 .mount-table-read-only-true:before {
-    color: #c7254e;
-    content: "\e033";
-}
-
-.mount-table-read-only-false:before {
     color: #5fa341;
-    content: "\e013";
+    content: "\e033";
 }
-




[hadoop] 28/41: HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6c9c040688cc4f69a036bbfe91a5c54fe72dc98d
Author: Vinayakumar B <vi...@apache.org>
AuthorDate: Wed Jan 16 18:06:17 2019 +0530

    HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena.
---
 .../federation/resolver/MountTableResolver.java    | 27 ++++--------------
 .../resolver/TestInitializeMountTableResolver.java | 32 +++++++---------------
 2 files changed, 16 insertions(+), 43 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 9e69840..da58551 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.hdfs.server.federation.resolver;
 
-import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
-import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMESERVICE_ID;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT;
@@ -50,8 +48,6 @@ import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.DFSUtil;
-import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
@@ -163,33 +159,22 @@ public class MountTableResolver
    * @param conf Configuration for this resolver.
    */
   private void initDefaultNameService(Configuration conf) {
-    this.defaultNameService = conf.get(
-        DFS_ROUTER_DEFAULT_NAMESERVICE,
-        DFSUtil.getNamenodeNameServiceId(conf));
-
     this.defaultNSEnable = conf.getBoolean(
         DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE,
         DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT);
 
-    if (defaultNameService == null) {
-      LOG.warn(
-          "{} and {} is not set. Fallback to {} as the default name service.",
-          DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
-      Collection<String> nsIds = DFSUtilClient.getNameServiceIds(conf);
-      if (nsIds.isEmpty()) {
-        this.defaultNameService = "";
-      } else {
-        this.defaultNameService = nsIds.iterator().next();
-      }
+    if (!this.defaultNSEnable) {
+      LOG.warn("Default name service is disabled.");
+      return;
     }
+    this.defaultNameService = conf.get(DFS_ROUTER_DEFAULT_NAMESERVICE, "");
 
     if (this.defaultNameService.equals("")) {
       this.defaultNSEnable = false;
       LOG.warn("Default name service is not set.");
     } else {
-      String enable = this.defaultNSEnable ? "enabled" : "disabled";
-      LOG.info("Default name service: {}, {} to read or write",
-          this.defaultNameService, enable);
+      LOG.info("Default name service: {}, enabled to read or write",
+          this.defaultNameService);
     }
   }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
index 5db7531..8a22ade 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
@@ -23,7 +23,9 @@ import org.junit.Test;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
 import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
 
 /**
  * Test {@link MountTableResolver} initialization.
@@ -43,40 +45,26 @@ public class TestInitializeMountTableResolver {
     conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "");
     MountTableResolver mountTable = new MountTableResolver(conf);
     assertEquals("", mountTable.getDefaultNamespace());
+    assertFalse("Default NS should be disabled if default NS is set empty",
+        mountTable.isDefaultNSEnable());
   }
 
   @Test
   public void testRouterDefaultNameservice() {
     Configuration conf = new Configuration();
-    conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "router_ns"); // this is priority
-    conf.set(DFS_NAMESERVICE_ID, "ns_id");
-    conf.set(DFS_NAMESERVICES, "nss");
+    conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "router_ns");
     MountTableResolver mountTable = new MountTableResolver(conf);
     assertEquals("router_ns", mountTable.getDefaultNamespace());
   }
 
+  // The default NS should be empty when it is explicitly disabled.
   @Test
-  public void testNameserviceID() {
+  public void testRouterDefaultNameserviceDisabled() {
     Configuration conf = new Configuration();
-    conf.set(DFS_NAMESERVICE_ID, "ns_id"); // this is priority
+    conf.setBoolean(DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE, false);
+    conf.set(DFS_NAMESERVICE_ID, "ns_id");
     conf.set(DFS_NAMESERVICES, "nss");
     MountTableResolver mountTable = new MountTableResolver(conf);
-    assertEquals("ns_id", mountTable.getDefaultNamespace());
-  }
-
-  @Test
-  public void testSingleNameservices() {
-    Configuration conf = new Configuration();
-    conf.set(DFS_NAMESERVICES, "ns1");
-    MountTableResolver mountTable = new MountTableResolver(conf);
-    assertEquals("ns1", mountTable.getDefaultNamespace());
-  }
-
-  @Test
-  public void testMultipleNameservices() {
-    Configuration conf = new Configuration();
-    conf.set(DFS_NAMESERVICES, "ns1,ns2");
-    MountTableResolver mountTable = new MountTableResolver(conf);
-    assertEquals("ns1", mountTable.getDefaultNamespace());
+    assertEquals("", mountTable.getDefaultNamespace());
   }
 }
\ No newline at end of file
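
Based on the new tests, the resulting configuration contract can be shown in a short sketch; it assumes the hadoop-hdfs-rbf artifact on the classpath, and the nameservice name "ns0" is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;

import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;

public class DefaultNameserviceSketch {
  public static void main(String[] args) {
    // Only the router-specific key is consulted now; dfs.nameservices and
    // dfs.nameservice.id no longer act as fallbacks.
    Configuration conf = new Configuration();
    conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "ns0");
    System.out.println(new MountTableResolver(conf).getDefaultNamespace()); // ns0

    // Disabling the default nameservice leaves it empty.
    Configuration disabled = new Configuration();
    disabled.setBoolean(DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE, false);
    System.out.println(
        "".equals(new MountTableResolver(disabled).getDefaultNamespace()));
  }
}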




[hadoop] 08/41: HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8fe8161805843f6e6de13343d41b404d34217657
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Wed Nov 14 18:35:12 2018 +0530

    HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota.
---
 .../federation/router/ConnectionManager.java       |  4 +-
 .../federation/router/TestConnectionManager.java   | 43 ++++++++++++++++++++++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 9fb83e4..fa2bf94 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -393,7 +393,7 @@ public class ConnectionManager {
   /**
    * Thread that creates connections asynchronously.
    */
-  private static class ConnectionCreator extends Thread {
+  static class ConnectionCreator extends Thread {
     /** If the creator is running. */
     private boolean running = true;
     /** Queue to push work to. */
@@ -426,6 +426,8 @@ public class ConnectionManager {
         } catch (InterruptedException e) {
           LOG.error("The connection creator was interrupted");
           this.running = false;
+        } catch (Throwable e) {
+          LOG.error("Fatal error caught by connection creator ", e);
         }
       }
     }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
index 0e1eb40..765f6c8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
@@ -22,12 +22,17 @@ import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.Rule;
+import org.junit.rules.ExpectedException;
 
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -49,6 +54,7 @@ public class TestConnectionManager {
   private static final UserGroupInformation TEST_USER3 =
       UserGroupInformation.createUserForTesting("user3", TEST_GROUP);
   private static final String TEST_NN_ADDRESS = "nn1:8080";
+  private static final String UNRESOLVED_TEST_NN_ADDRESS = "unknownhost:8080";
 
   @Before
   public void setup() throws Exception {
@@ -59,6 +65,9 @@ public class TestConnectionManager {
     connManager.start();
   }
 
+  @Rule
+  public ExpectedException exceptionRule = ExpectedException.none();
+
   @After
   public void shutdown() {
     if (connManager != null) {
@@ -122,6 +131,40 @@ public class TestConnectionManager {
   }
 
   @Test
+  public void testConnectionCreatorWithException() throws Exception {
+    // Create a bad connection pool pointing to unresolvable namenode address.
+    ConnectionPool badPool = new ConnectionPool(
+        conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 0, 10,
+        ClientProtocol.class);
+    BlockingQueue<ConnectionPool> queue = new ArrayBlockingQueue<>(1);
+    queue.add(badPool);
+    ConnectionManager.ConnectionCreator connectionCreator =
+        new ConnectionManager.ConnectionCreator(queue);
+    connectionCreator.setDaemon(true);
+    connectionCreator.start();
+    // Wait to make sure the async thread is scheduled and picks up the work.
+    GenericTestUtils.waitFor(() -> queue.isEmpty(), 50, 5000);
+    // At this point connection creation task should be definitely picked up.
+    assertTrue(queue.isEmpty());
+    // At this point connection thread should still be alive.
+    assertTrue(connectionCreator.isAlive());
+    // Stop the thread as the test is successful at this point.
+    connectionCreator.interrupt();
+  }
+
+  @Test
+  public void testGetConnectionWithException() throws Exception {
+    String exceptionCause = "java.net.UnknownHostException: unknownhost";
+    exceptionRule.expect(IllegalArgumentException.class);
+    exceptionRule.expectMessage(exceptionCause);
+
+    // Create a bad connection pool pointing to unresolvable namenode address.
+    ConnectionPool badPool = new ConnectionPool(
+        conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 1, 10,
+        ClientProtocol.class);
+  }
+
+  @Test
   public void testGetConnection() throws Exception {
     Map<ConnectionPoolId, ConnectionPool> poolMap = connManager.getPools();
     final int totalConns = 10;


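The gist of the fix generalizes: a long-lived worker loop that only catches InterruptedException dies on the first unchecked error, such as the IllegalArgumentException raised for an unresolvable address. Below is a self-contained sketch of the catch-Throwable pattern ConnectionCreator now uses; all names are illustrative.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ResilientWorkerSketch {
  public static void main(String[] args) throws Exception {
    BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10);
    Thread worker = new Thread(() -> {
      while (true) {
        try {
          queue.take().run();
        } catch (InterruptedException e) {
          // Interruption is the shutdown signal.
          Thread.currentThread().interrupt();
          return;
        } catch (Throwable t) {
          // Without this catch, one bad task would kill the thread for good.
          System.err.println("Fatal error caught by worker: " + t);
        }
      }
    });
    worker.setDaemon(true);
    worker.start();
    queue.add(() -> { throw new IllegalArgumentException("unknownhost:8080"); });
    queue.add(() -> System.out.println("still alive"));
    Thread.sleep(200);
    worker.interrupt();
  }
}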


[hadoop] 38/41: HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5f5ba94c80270c97e762da2cecf9e150cf7e4527
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Thu Feb 14 08:16:45 2019 +0530

    HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.
---
 .../server/federation/router/RBFConfigKeys.java    |   9 +
 .../federation/router/RouterClientProtocol.java    |  16 +-
 .../server/federation/router/RouterRpcServer.java  |  21 +-
 .../router/security/RouterSecurityManager.java     | 239 +++++++++++++++++++++
 .../federation/router/security/package-info.java   |  28 +++
 .../token/ZKDelegationTokenSecretManagerImpl.java  |  56 +++++
 .../router/security/token/package-info.java        |  29 +++
 .../src/main/resources/hdfs-rbf-default.xml        |  11 +-
 .../fs/contract/router/SecurityConfUtil.java       |   4 +
 .../TestRouterHDFSContractDelegationToken.java     | 101 +++++++++
 .../security/MockDelegationTokenSecretManager.java |  52 +++++
 .../security/TestRouterSecurityManager.java        |  93 ++++++++
 12 files changed, 652 insertions(+), 7 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 5e907c8..657b6cf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -28,6 +28,8 @@ import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl;
 
 import java.util.concurrent.TimeUnit;
 
@@ -294,4 +296,11 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
 
   public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY =
       FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
+
+  // HDFS Router secret manager for delegation token
+  public static final String DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS =
+      FEDERATION_ROUTER_PREFIX + "secret.manager.class";
+  public static final Class<? extends AbstractDelegationTokenSecretManager>
+      DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT =
+      ZKDelegationTokenSecretManagerImpl.class;
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index f20b4b6..5383a7d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -77,6 +77,7 @@ import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
@@ -124,6 +125,8 @@ public class RouterClientProtocol implements ClientProtocol {
   private final ErasureCoding erasureCoding;
   /** StoragePolicy calls. **/
   private final RouterStoragePolicy storagePolicy;
+  /** Router security manager to handle token operations. */
+  private RouterSecurityManager securityManager = null;
 
   RouterClientProtocol(Configuration conf, RouterRpcServer rpcServer) {
     this.rpcServer = rpcServer;
@@ -142,13 +145,14 @@ public class RouterClientProtocol implements ClientProtocol {
         DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
     this.erasureCoding = new ErasureCoding(rpcServer);
     this.storagePolicy = new RouterStoragePolicy(rpcServer);
+    this.securityManager = rpcServer.getRouterSecurityManager();
   }
 
   @Override
   public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
       throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
-    return null;
+    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
+    return this.securityManager.getDelegationToken(renewer);
   }
 
   /**
@@ -167,14 +171,16 @@ public class RouterClientProtocol implements ClientProtocol {
   @Override
   public long renewDelegationToken(Token<DelegationTokenIdentifier> token)
       throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
-    return 0;
+    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
+    return this.securityManager.renewDelegationToken(token);
   }
 
   @Override
   public void cancelDelegationToken(Token<DelegationTokenIdentifier> token)
       throws IOException {
-    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
+    rpcServer.checkOperation(NameNode.OperationCategory.WRITE, true);
+    this.securityManager.cancelDelegationToken(token);
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index be6a9b0..a312d4b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -114,6 +114,7 @@ import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
 import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
 import org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException;
 import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
@@ -197,6 +198,8 @@ public class RouterRpcServer extends AbstractService
   private final RouterNamenodeProtocol nnProto;
   /** ClientProtocol calls. */
   private final RouterClientProtocol clientProto;
+  /** Router security manager to handle token operations. */
+  private RouterSecurityManager securityManager = null;
 
   /**
    * Construct a router RPC server.
@@ -256,6 +259,9 @@ public class RouterRpcServer extends AbstractService
     LOG.info("RPC server binding to {} with {} handlers for Router {}",
         confRpcAddress, handlerCount, this.router.getRouterId());
 
+    // Create security manager
+    this.securityManager = new RouterSecurityManager(this.conf);
+
     this.rpcServer = new RPC.Builder(this.conf)
         .setProtocol(ClientNamenodeProtocolPB.class)
         .setInstance(clientNNPbService)
@@ -265,6 +271,7 @@ public class RouterRpcServer extends AbstractService
         .setnumReaders(readerCount)
         .setQueueSizePerHandler(handlerQueueSize)
         .setVerbose(false)
+        .setSecretManager(this.securityManager.getSecretManager())
         .build();
 
     // Add all the RPC protocols that the Router implements
@@ -344,10 +351,22 @@ public class RouterRpcServer extends AbstractService
     if (rpcMonitor != null) {
       this.rpcMonitor.close();
     }
+    if (securityManager != null) {
+      this.securityManager.stop();
+    }
     super.serviceStop();
   }
 
   /**
+   * Get the RPC security manager.
+   *
+   * @return RPC security manager.
+   */
+  public RouterSecurityManager getRouterSecurityManager() {
+    return this.securityManager;
+  }
+
+  /**
    * Get the RPC client to the Namenode.
    *
    * @return RPC clients to the Namenodes.
@@ -1457,7 +1476,7 @@ public class RouterRpcServer extends AbstractService
    * @return Remote user group information.
    * @throws IOException If we cannot get the user information.
    */
-  static UserGroupInformation getRemoteUser() throws IOException {
+  public static UserGroupInformation getRemoteUser() throws IOException {
     UserGroupInformation ugi = Server.getRemoteUser();
     return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/RouterSecurityManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/RouterSecurityManager.java
new file mode 100644
index 0000000..0f0089a
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/RouterSecurityManager.java
@@ -0,0 +1,239 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router.security;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
+import org.apache.hadoop.security.token.SecretManager;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+
+/**
+ * Manager to hold underlying delegation token secret manager implementations.
+ */
+public class RouterSecurityManager {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(RouterSecurityManager.class);
+
+  private AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
+      dtSecretManager = null;
+
+  public RouterSecurityManager(Configuration conf) {
+    this.dtSecretManager = newSecretManager(conf);
+  }
+
+  @VisibleForTesting
+  public RouterSecurityManager(AbstractDelegationTokenSecretManager
+      <DelegationTokenIdentifier> dtSecretManager) {
+    this.dtSecretManager = dtSecretManager;
+  }
+
+  /**
+   * Creates an instance of a SecretManager from the configuration.
+   *
+   * @param conf Configuration that defines the secret manager class.
+   * @return New secret manager.
+   */
+  public static AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
+      newSecretManager(Configuration conf) {
+    Class<? extends AbstractDelegationTokenSecretManager> clazz =
+        conf.getClass(
+        RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
+        RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT,
+        AbstractDelegationTokenSecretManager.class);
+    AbstractDelegationTokenSecretManager secretManager;
+    try {
+      Constructor constructor = clazz.getConstructor(Configuration.class);
+      secretManager = (AbstractDelegationTokenSecretManager)
+          constructor.newInstance(conf);
+      LOG.info("Delegation token secret manager object instantiated");
+    } catch (ReflectiveOperationException e) {
+      LOG.error("Could not instantiate: {}", clazz.getSimpleName(), e);
+      return null;
+    } catch (RuntimeException e) {
+      LOG.error("RuntimeException to instantiate: {}",
+          clazz.getSimpleName(), e);
+      return null;
+    }
+    return secretManager;
+  }
+
+  public AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
+      getSecretManager() {
+    return this.dtSecretManager;
+  }
+
+  public void stop() {
+    LOG.info("Stopping security manager");
+    if (this.dtSecretManager != null) {
+      this.dtSecretManager.stopThreads();
+    }
+  }
+
+  private static UserGroupInformation getRemoteUser() throws IOException {
+    return RouterRpcServer.getRemoteUser();
+  }
+
+  /**
+   * Returns authentication method used to establish the connection.
+   * @return AuthenticationMethod used to establish connection.
+   * @throws IOException If the remote user cannot be determined.
+   */
+  private UserGroupInformation.AuthenticationMethod
+      getConnectionAuthenticationMethod() throws IOException {
+    UserGroupInformation ugi = getRemoteUser();
+    UserGroupInformation.AuthenticationMethod authMethod
+        = ugi.getAuthenticationMethod();
+    if (authMethod == UserGroupInformation.AuthenticationMethod.PROXY) {
+      authMethod = ugi.getRealUser().getAuthenticationMethod();
+    }
+    return authMethod;
+  }
+
+  /**
+   * Check if the connection authentication method permits delegation
+   * token operations.
+   * @return True if the delegation token operation is allowed.
+   */
+  private boolean isAllowedDelegationTokenOp() throws IOException {
+    AuthenticationMethod authMethod = getConnectionAuthenticationMethod();
+    if (UserGroupInformation.isSecurityEnabled()
+        && (authMethod != AuthenticationMethod.KERBEROS)
+        && (authMethod != AuthenticationMethod.KERBEROS_SSL)
+        && (authMethod != AuthenticationMethod.CERTIFICATE)) {
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * Issue a delegation token for the given renewer.
+   * @param renewer Renewer information.
+   * @return Delegation token.
+   * @throws IOException If the token cannot be issued.
+   */
+  public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
+      throws IOException {
+    LOG.debug("Generate delegation token with renewer " + renewer);
+    final String operationName = "getDelegationToken";
+    boolean success = false;
+    String tokenId = "";
+    Token<DelegationTokenIdentifier> token;
+    try {
+      if (!isAllowedDelegationTokenOp()) {
+        throw new IOException(
+            "Delegation Token can be issued only " +
+                "with kerberos or web authentication");
+      }
+      if (dtSecretManager == null || !dtSecretManager.isRunning()) {
+        LOG.warn("trying to get DT with no secret manager running");
+        return null;
+      }
+      UserGroupInformation ugi = getRemoteUser();
+      String user = ugi.getUserName();
+      Text owner = new Text(user);
+      Text realUser = null;
+      if (ugi.getRealUser() != null) {
+        realUser = new Text(ugi.getRealUser().getUserName());
+      }
+      DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(owner,
+          renewer, realUser);
+      token = new Token<DelegationTokenIdentifier>(
+          dtId, dtSecretManager);
+      tokenId = dtId.toStringStable();
+      success = true;
+    } finally {
+      logAuditEvent(success, operationName, tokenId);
+    }
+    return token;
+  }
+
+  public long renewDelegationToken(Token<DelegationTokenIdentifier> token)
+          throws SecretManager.InvalidToken, IOException {
+    LOG.debug("Renew delegation token");
+    final String operationName = "renewDelegationToken";
+    boolean success = false;
+    String tokenId = "";
+    long expiryTime;
+    try {
+      if (!isAllowedDelegationTokenOp()) {
+        throw new IOException(
+            "Delegation Token can be renewed only " +
+                "with kerberos or web authentication");
+      }
+      String renewer = getRemoteUser().getShortUserName();
+      expiryTime = dtSecretManager.renewToken(token, renewer);
+      final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
+      tokenId = id.toStringStable();
+      success = true;
+    } catch (AccessControlException ace) {
+      final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
+      tokenId = id.toStringStable();
+      throw ace;
+    } finally {
+      logAuditEvent(success, operationName, tokenId);
+    }
+    return expiryTime;
+  }
+
+  public void cancelDelegationToken(Token<DelegationTokenIdentifier> token)
+          throws IOException {
+    LOG.debug("Cancel delegation token");
+    final String operationName = "cancelDelegationToken";
+    boolean success = false;
+    String tokenId = "";
+    try {
+      String canceller = getRemoteUser().getUserName();
+      LOG.info("Cancel request by " + canceller);
+      DelegationTokenIdentifier id =
+          dtSecretManager.cancelToken(token, canceller);
+      tokenId = id.toStringStable();
+      success = true;
+    } catch (AccessControlException ace) {
+      final DelegationTokenIdentifier id = DFSUtil.decodeDelegationToken(token);
+      tokenId = id.toStringStable();
+      throw ace;
+    } finally {
+      logAuditEvent(success, operationName, tokenId);
+    }
+  }
+
+  /**
+   * Log status of delegation token related operation.
+   * Extend in future to use audit logger instead of local logging.
+   */
+  void logAuditEvent(boolean succeeded, String cmd, String tokenId)
+      throws IOException {
+    LOG.debug("Operation:{} Status:{} TokenId:{}",
+        cmd, succeeded, tokenId);
+  }
+}
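
A hedged sketch of how this manager is driven, modelled on the new TestRouterSecurityManager: the mock secret manager ships under the test sources, so this is for local experimentation only, and the class name SecurityManagerSketch is invented.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class SecurityManagerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Swap the ZooKeeper-backed default for the in-memory mock from the
    // test sources.
    conf.set(RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
        "org.apache.hadoop.hdfs.server.federation.security."
            + "MockDelegationTokenSecretManager");
    RouterSecurityManager manager = new RouterSecurityManager(conf);
    try {
      // Use the current user as renewer so renew and cancel succeed.
      String user = UserGroupInformation.getCurrentUser().getShortUserName();
      Token<DelegationTokenIdentifier> token =
          manager.getDelegationToken(new Text(user));
      System.out.println("Issued: " + token);
      manager.renewDelegationToken(token);
      manager.cancelDelegationToken(token);
    } finally {
      manager.stop();
    }
  }
}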
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/package-info.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/package-info.java
new file mode 100644
index 0000000..9dd12ec
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/package-info.java
@@ -0,0 +1,28 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Includes router security manager and token store implementations.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+
+package org.apache.hadoop.hdfs.server.federation.router.security;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java
new file mode 100644
index 0000000..3da63f8
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/ZKDelegationTokenSecretManagerImpl.java
@@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router.security.token;
+
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
+import org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+/**
+ * Zookeeper based router delegation token store implementation.
+ */
+public class ZKDelegationTokenSecretManagerImpl extends
+    ZKDelegationTokenSecretManager<AbstractDelegationTokenIdentifier> {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ZKDelegationTokenSecretManagerImpl.class);
+
+  private Configuration conf = null;
+
+  public ZKDelegationTokenSecretManagerImpl(Configuration conf) {
+    super(conf);
+    this.conf = conf;
+    try {
+      super.startThreads();
+    } catch (IOException e) {
+      LOG.error("Error starting threads for zkDelegationTokens ");
+    }
+    LOG.info("Zookeeper delegation token secret manager instantiated");
+  }
+
+  @Override
+  public DelegationTokenIdentifier createIdentifier() {
+    return new DelegationTokenIdentifier();
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java
new file mode 100644
index 0000000..a51e455
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Includes implementations of token secret managers.
+ * Implementations should extend {@link AbstractDelegationTokenSecretManager}.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+
+package org.apache.hadoop.hdfs.server.federation.router.security.token;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index afe3ad1..1034c87 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -584,4 +584,13 @@
     </description>
   </property>
 
-</configuration>
\ No newline at end of file
+  <property>
+    <name>dfs.federation.router.secret.manager.class</name>
+    <value>org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl</value>
+    <description>
+      Class that implements the state store for delegation tokens.
+      The default implementation uses ZooKeeper as the backend to store
+      delegation tokens.
+    </description>
+  </property>
+
+</configuration>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
index 100313e..d6ee3c7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
@@ -31,6 +31,7 @@ import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_RPC_BIND_HOST_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS;
 import static org.junit.Assert.assertTrue;
 
 import java.io.File;
@@ -43,6 +44,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl;
+import org.apache.hadoop.hdfs.server.federation.security.MockDelegationTokenSecretManager;
 import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.SecurityUtil;
@@ -144,6 +146,8 @@ public final class SecurityConfUtil {
 
     // We need to specify the host to prevent 0.0.0.0 as the host address
     conf.set(DFS_ROUTER_RPC_BIND_HOST_KEY, "localhost");
+    conf.set(DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS,
+        MockDelegationTokenSecretManager.class.getName());
 
     return conf;
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDelegationToken.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDelegationToken.java
new file mode 100644
index 0000000..e4c03e4
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDelegationToken.java
@@ -0,0 +1,101 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.AbstractFSContractTestBase;
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.security.token.SecretManager;
+import org.apache.hadoop.security.token.Token;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.io.IOException;
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+/**
+ * Test to verify router contracts for delegation token operations.
+ */
+public class TestRouterHDFSContractDelegationToken
+    extends AbstractFSContractTestBase {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+
+  @Rule
+  public ExpectedException exceptionRule = ExpectedException.none();
+
+  @Test
+  public void testRouterDelegationToken() throws Exception {
+    // Generate delegation token
+    Token<DelegationTokenIdentifier> token =
+        (Token<DelegationTokenIdentifier>) getFileSystem()
+        .getDelegationToken("router");
+    assertNotNull(token);
+    // Verify properties of the token
+    assertEquals("HDFS_DELEGATION_TOKEN", token.getKind().toString());
+    DelegationTokenIdentifier identifier = token.decodeIdentifier();
+    assertNotNull(identifier);
+    String owner = identifier.getOwner().toString();
+    // Windows will not reverse name lookup "127.0.0.1" to "localhost".
+    String host = Path.WINDOWS ? "127.0.0.1" : "localhost";
+    String expectedOwner = "router/"+ host + "@EXAMPLE.COM";
+    assertEquals(expectedOwner, owner);
+    assertEquals("router", identifier.getRenewer().toString());
+    int masterKeyId = identifier.getMasterKeyId();
+    assertTrue(masterKeyId > 0);
+    int sequenceNumber = identifier.getSequenceNumber();
+    assertTrue(sequenceNumber > 0);
+    long existingMaxTime = token.decodeIdentifier().getMaxDate();
+    assertTrue(identifier.getMaxDate() >= identifier.getIssueDate());
+
+    // Renew delegation token
+    token.renew(initSecurity());
+    assertNotNull(token);
+    assertTrue(token.decodeIdentifier().getMaxDate() >= existingMaxTime);
+    // Renewal should retain old master key id and sequence number
+    identifier = token.decodeIdentifier();
+    assertEquals(identifier.getMasterKeyId(), masterKeyId);
+    assertEquals(identifier.getSequenceNumber(), sequenceNumber);
+
+    // Cancel delegation token
+    token.cancel(initSecurity());
+
+    // Renew a cancelled token
+    exceptionRule.expect(SecretManager.InvalidToken.class);
+    token.renew(initSecurity());
+  }
+}
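
Stripped of the contract-test scaffolding, the client-side lifecycle the
test exercises reduces to the sketch below; fs is assumed to be a
FileSystem bound to the Router and conf a Configuration carrying the
caller's credentials.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.token.Token;

    public class TokenLifecycle {
      // Sketch: issue, renew, and cancel a delegation token via the Router.
      static void run(FileSystem fs, Configuration conf) throws Exception {
        Token<?> token = fs.getDelegationToken("router");  // issue
        long newExpiry = token.renew(conf);                // extend lifetime
        System.out.println("Token valid until " + newExpiry);
        token.cancel(conf);                                // invalidate
        // A further renew(conf) now fails with SecretManager.InvalidToken.
      }
    }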
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/MockDelegationTokenSecretManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/MockDelegationTokenSecretManager.java
new file mode 100644
index 0000000..8f89f0a
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/MockDelegationTokenSecretManager.java
@@ -0,0 +1,52 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.security;
+
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.apache.hadoop.conf.Configuration;
+import java.io.IOException;
+
+/**
+ * Mock implementation of {@link AbstractDelegationTokenSecretManager}
+ * for testing.
+ */
+public class MockDelegationTokenSecretManager
+    extends AbstractDelegationTokenSecretManager<DelegationTokenIdentifier> {
+
+  public MockDelegationTokenSecretManager(
+      long delegationKeyUpdateInterval,
+      long delegationTokenMaxLifetime,
+      long delegationTokenRenewInterval,
+      long delegationTokenRemoverScanInterval) {
+    super(delegationKeyUpdateInterval, delegationTokenMaxLifetime,
+        delegationTokenRenewInterval, delegationTokenRemoverScanInterval);
+  }
+
+  public MockDelegationTokenSecretManager(Configuration conf)
+      throws IOException {
+    super(100000, 100000, 100000, 100000);
+    this.startThreads();
+  }
+
+  @Override
+  public DelegationTokenIdentifier createIdentifier() {
+    return new DelegationTokenIdentifier();
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
new file mode 100644
index 0000000..fe6e0ee
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/security/TestRouterSecurityManager.java
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.security;
+
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.SecretManager;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.junit.rules.ExpectedException;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNotNull;
+
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Test functionality of {@link RouterSecurityManager}, which manages
+ * delegation tokens for the Router.
+ */
+public class TestRouterSecurityManager {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestRouterSecurityManager.class);
+
+  private static RouterSecurityManager securityManager = null;
+
+  @BeforeClass
+  public static void createMockSecretManager() throws IOException {
+    AbstractDelegationTokenSecretManager<DelegationTokenIdentifier>
+        mockDelegationTokenSecretManager =
+        new MockDelegationTokenSecretManager(100, 100, 100, 100);
+    mockDelegationTokenSecretManager.startThreads();
+    securityManager =
+        new RouterSecurityManager(mockDelegationTokenSecretManager);
+  }
+
+  @Rule
+  public ExpectedException exceptionRule = ExpectedException.none();
+
+  @Test
+  public void testDelegationTokens() throws IOException {
+    String[] groupsForTesting = new String[1];
+    groupsForTesting[0] = "router_group";
+    UserGroupInformation.setLoginUser(UserGroupInformation
+        .createUserForTesting("router", groupsForTesting));
+
+    // Get a delegation token
+    Token<DelegationTokenIdentifier> token =
+        securityManager.getDelegationToken(new Text("some_renewer"));
+    assertNotNull(token);
+
+    // Renew the delegation token
+    UserGroupInformation.setLoginUser(UserGroupInformation
+        .createUserForTesting("some_renewer", groupsForTesting));
+    long updatedExpirationTime = securityManager.renewDelegationToken(token);
+    assertTrue(updatedExpirationTime >= token.decodeIdentifier().getMaxDate());
+
+    // Cancel the delegation token
+    securityManager.cancelDelegationToken(token);
+
+    String exceptionCause = "Renewal request for unknown token";
+    exceptionRule.expect(SecretManager.InvalidToken.class);
+    exceptionRule.expectMessage(exceptionCause);
+
+    // This throws an exception as token has been cancelled.
+    securityManager.renewDelegationToken(token);
+  }
+}




[hadoop] 20/41: HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3b971fe4d1e63fbf262be403a9df93e771b19c44
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Wed Jan 2 10:38:33 2019 -0800

    HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri.
---
 .../server/federation/metrics/FederationMBean.java     |  6 ++++++
 .../server/federation/metrics/FederationMetrics.java   |  6 ++++++
 .../server/federation/metrics/NamenodeBeanMetrics.java |  7 ++++++-
 .../resolver/MembershipNamenodeResolver.java           |  1 +
 .../federation/resolver/NamenodeStatusReport.java      | 18 +++++++++++++++---
 .../federation/router/NamenodeHeartbeatService.java    |  1 +
 .../federation/store/records/MembershipStats.java      |  4 ++++
 .../store/records/impl/pb/MembershipStatsPBImpl.java   | 10 ++++++++++
 .../src/main/proto/FederationProtocol.proto            |  1 +
 .../federation/metrics/TestFederationMetrics.java      |  7 +++++++
 .../federation/store/records/TestMembershipState.java  |  3 +++
 11 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
index 79fb3e4..b37f5ef 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
@@ -107,6 +107,12 @@ public interface FederationMBean {
   int getNumDeadNodes();
 
   /**
+   * Get the number of stale datanodes.
+   * @return Number of stale datanodes.
+   */
+  int getNumStaleNodes();
+
+  /**
    * Get the number of decommissioning datanodes.
    * @return Number of decommissioning datanodes.
    */
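
The new counter is also reachable over JMX once the Router registers its
federation bean; a minimal in-process probe is sketched below. The
ObjectName is an assumption based on the usual "Router"/"FederationState"
registration and should be checked against the Router's /jmx output.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class StaleNodesProbe {
      public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Assumed bean name; verify against the running Router.
        ObjectName name =
            new ObjectName("Hadoop:service=Router,name=FederationState");
        Object stale = server.getAttribute(name, "NumStaleNodes");
        System.out.println("Stale datanodes across subclusters: " + stale);
      }
    }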
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index 6a0a46e..b3fe6cc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -414,6 +414,12 @@ public class FederationMetrics implements FederationMBean {
   }
 
   @Override
+  public int getNumStaleNodes() {
+    return getNameserviceAggregatedInt(
+        MembershipStats::getNumOfStaleDatanodes);
+  }
+
+  @Override
   public int getNumDecommissioningNodes() {
     return getNameserviceAggregatedInt(
         MembershipStats::getNumOfDecommissioningDatanodes);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 25ec27c..5e95606 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -626,7 +626,12 @@ public class NamenodeBeanMetrics
 
   @Override
   public int getNumStaleDataNodes() {
-    return -1;
+    try {
+      return getFederationMetrics().getNumStaleNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of stale nodes: {}", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
index 2707304..178db1b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
@@ -280,6 +280,7 @@ public class MembershipNamenodeResolver
           report.getNumDecommissioningDatanodes());
       stats.setNumOfActiveDatanodes(report.getNumLiveDatanodes());
       stats.setNumOfDeadDatanodes(report.getNumDeadDatanodes());
+      stats.setNumOfStaleDatanodes(report.getNumStaleDatanodes());
       stats.setNumOfDecomActiveDatanodes(report.getNumDecomLiveDatanodes());
       stats.setNumOfDecomDeadDatanodes(report.getNumDecomDeadDatanodes());
       record.setStats(stats);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
index b121e24..5b603fa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
@@ -42,6 +42,7 @@ public class NamenodeStatusReport {
   /** Datanodes stats. */
   private int liveDatanodes = -1;
   private int deadDatanodes = -1;
+  private int staleDatanodes = -1;
   /** Decommissioning datanodes. */
   private int decomDatanodes = -1;
   /** Live decommissioned datanodes. */
@@ -223,14 +224,16 @@ public class NamenodeStatusReport {
    *
    * @param numLive Number of live nodes.
    * @param numDead Number of dead nodes.
+   * @param numStale Number of stale nodes.
    * @param numDecom Number of decommissioning nodes.
    * @param numLiveDecom Number of decommissioned live nodes.
    * @param numDeadDecom Number of decommissioned dead nodes.
    */
-  public void setDatanodeInfo(int numLive, int numDead, int numDecom,
-      int numLiveDecom, int numDeadDecom) {
+  public void setDatanodeInfo(int numLive, int numDead, int numStale,
+      int numDecom, int numLiveDecom, int numDeadDecom) {
     this.liveDatanodes = numLive;
     this.deadDatanodes = numDead;
+    this.staleDatanodes = numStale;
     this.decomDatanodes = numDecom;
     this.liveDecomDatanodes = numLiveDecom;
     this.deadDecomDatanodes = numDeadDecom;
@@ -247,7 +250,7 @@ public class NamenodeStatusReport {
   }
 
   /**
-   * Get the number of dead blocks.
+   * Get the number of dead nodes.
    *
    * @return The number of dead nodes.
    */
@@ -256,6 +259,15 @@ public class NamenodeStatusReport {
   }
 
   /**
+   * Get the number of stale nodes.
+   *
+   * @return The number of stale nodes.
+   */
+  public int getNumStaleDatanodes() {
+    return this.staleDatanodes;
+  }
+
+  /**
   * Get the number of decommissioning nodes.
   *
   * @return The number of decommissioning nodes.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index 871ebaf..475e90d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -338,6 +338,7 @@ public class NamenodeHeartbeatService extends PeriodicService {
             report.setDatanodeInfo(
                 jsonObject.getInt("NumLiveDataNodes"),
                 jsonObject.getInt("NumDeadDataNodes"),
+                jsonObject.getInt("NumStaleDataNodes"),
                 jsonObject.getInt("NumDecommissioningDataNodes"),
                 jsonObject.getInt("NumDecomLiveDataNodes"),
                 jsonObject.getInt("NumDecomDeadDataNodes"));
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
index 654140c..d452cd2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
@@ -81,6 +81,10 @@ public abstract class MembershipStats extends BaseRecord {
 
   public abstract int getNumOfDeadDatanodes();
 
+  public abstract void setNumOfStaleDatanodes(int nodes);
+
+  public abstract int getNumOfStaleDatanodes();
+
   public abstract void setNumOfDecommissioningDatanodes(int nodes);
 
   public abstract int getNumOfDecommissioningDatanodes();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
index 3347bc6..50ecbf3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
@@ -169,6 +169,16 @@ public class MembershipStatsPBImpl extends MembershipStats
   }
 
   @Override
+  public void setNumOfStaleDatanodes(int nodes) {
+    this.translator.getBuilder().setNumOfStaleDatanodes(nodes);
+  }
+
+  @Override
+  public int getNumOfStaleDatanodes() {
+    return this.translator.getProtoOrBuilder().getNumOfStaleDatanodes();
+  }
+
+  @Override
   public void setNumOfDecommissioningDatanodes(int nodes) {
     this.translator.getBuilder().setNumOfDecommissioningDatanodes(nodes);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
index 17ae299..1e5e37b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
@@ -45,6 +45,7 @@ message NamenodeMembershipStatsRecordProto {
   optional uint32 numOfDecommissioningDatanodes = 22;
   optional uint32 numOfDecomActiveDatanodes = 23;
   optional uint32 numOfDecomDeadDatanodes = 24;
+  optional uint32 numOfStaleDatanodes = 25;
 }
 
 message NamenodeMembershipRecordProto {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
index 94799f3..5d984e8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
@@ -137,6 +137,8 @@ public class TestFederationMetrics extends TestMetricsBase {
           stats.getNumOfActiveDatanodes());
       assertEquals(json.getLong("numOfDeadDatanodes"),
           stats.getNumOfDeadDatanodes());
+      assertEquals(json.getLong("numOfStaleDatanodes"),
+          stats.getNumOfStaleDatanodes());
       assertEquals(json.getLong("numOfDecommissioningDatanodes"),
           stats.getNumOfDecommissioningDatanodes());
       assertEquals(json.getLong("numOfDecomActiveDatanodes"),
@@ -187,6 +189,8 @@ public class TestFederationMetrics extends TestMetricsBase {
           json.getLong("numOfActiveDatanodes"));
       assertEquals(stats.getNumOfDeadDatanodes(),
           json.getLong("numOfDeadDatanodes"));
+      assertEquals(stats.getNumOfStaleDatanodes(),
+          json.getLong("numOfStaleDatanodes"));
       assertEquals(stats.getNumOfDecommissioningDatanodes(),
           json.getLong("numOfDecommissioningDatanodes"));
       assertEquals(stats.getNumOfDecomActiveDatanodes(),
@@ -260,6 +264,7 @@ public class TestFederationMetrics extends TestMetricsBase {
     long numBlocks = 0;
     long numLive = 0;
     long numDead = 0;
+    long numStale = 0;
     long numDecom = 0;
     long numDecomLive = 0;
     long numDecomDead = 0;
@@ -269,6 +274,7 @@ public class TestFederationMetrics extends TestMetricsBase {
       numBlocks += stats.getNumOfBlocks();
       numLive += stats.getNumOfActiveDatanodes();
       numDead += stats.getNumOfDeadDatanodes();
+      numStale += stats.getNumOfStaleDatanodes();
       numDecom += stats.getNumOfDecommissioningDatanodes();
       numDecomLive += stats.getNumOfDecomActiveDatanodes();
       numDecomDead += stats.getNumOfDecomDeadDatanodes();
@@ -277,6 +283,7 @@ public class TestFederationMetrics extends TestMetricsBase {
     assertEquals(numBlocks, bean.getNumBlocks());
     assertEquals(numLive, bean.getNumLiveNodes());
     assertEquals(numDead, bean.getNumDeadNodes());
+    assertEquals(numStale, bean.getNumStaleNodes());
     assertEquals(numDecom, bean.getNumDecommissioningNodes());
     assertEquals(numDecomLive, bean.getNumDecomLiveNodes());
     assertEquals(numDecomDead, bean.getNumDecomDeadNodes());
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMembershipState.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMembershipState.java
index d922414..1aac632 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMembershipState.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMembershipState.java
@@ -47,6 +47,7 @@ public class TestMembershipState {
   private static final long NUM_BLOCKS = 300;
   private static final long NUM_FILES = 400;
   private static final int NUM_DEAD = 500;
+  private static final int NUM_STALE = 550;
   private static final int NUM_ACTIVE = 600;
   private static final int NUM_DECOM = 700;
   private static final int NUM_DECOM_ACTIVE = 800;
@@ -73,6 +74,7 @@ public class TestMembershipState {
     stats.setNumOfFiles(NUM_FILES);
     stats.setNumOfActiveDatanodes(NUM_ACTIVE);
     stats.setNumOfDeadDatanodes(NUM_DEAD);
+    stats.setNumOfStaleDatanodes(NUM_STALE);
     stats.setNumOfDecommissioningDatanodes(NUM_DECOM);
     stats.setNumOfDecomActiveDatanodes(NUM_DECOM_ACTIVE);
     stats.setNumOfDecomDeadDatanodes(NUM_DECOM_DEAD);
@@ -101,6 +103,7 @@ public class TestMembershipState {
     assertEquals(NUM_FILES, stats.getNumOfFiles());
     assertEquals(NUM_ACTIVE, stats.getNumOfActiveDatanodes());
     assertEquals(NUM_DEAD, stats.getNumOfDeadDatanodes());
+    assertEquals(NUM_STALE, stats.getNumOfStaleDatanodes());
     assertEquals(NUM_DECOM, stats.getNumOfDecommissioningDatanodes());
     assertEquals(NUM_DECOM_ACTIVE, stats.getNumOfDecomActiveDatanodes());
     assertEquals(NUM_DECOM_DEAD, stats.getNumOfDecomDeadDatanodes());




[hadoop] 12/41: HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 16b8f759a1eafba5654767b3344e5b1a4740d851
Author: Surendra Singh Lilhore <su...@apache.org>
AuthorDate: Tue Dec 4 12:23:56 2018 +0530

    HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena.
---
 .../server/federation/router/FederationUtil.java   |  23 +-
 .../federation/router/RouterClientProtocol.java    |  29 +-
 .../federation/router/TestRouterMountTable.java    | 307 ++++++++++++++++-----
 3 files changed, 278 insertions(+), 81 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
index f8c7a9b..f0d9168 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
@@ -27,6 +27,7 @@ import java.net.URLConnection;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
@@ -205,4 +206,24 @@ public final class FederationUtil {
     return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
         || parent.equals(Path.SEPARATOR);
   }
-}
+
+  /**
+   * Add the number of children to an existing HdfsFileStatus object.
+   * @param dirStatus HdfsFileStatus object.
+   * @param children number of children to be added.
+   * @return HdfsFileStatus with the number of children specified.
+   */
+  public static HdfsFileStatus updateMountPointStatus(HdfsFileStatus dirStatus,
+      int children) {
+    return new HdfsFileStatus.Builder().atime(dirStatus.getAccessTime())
+        .blocksize(dirStatus.getBlockSize()).children(children)
+        .ecPolicy(dirStatus.getErasureCodingPolicy())
+        .feInfo(dirStatus.getFileEncryptionInfo()).fileId(dirStatus.getFileId())
+        .group(dirStatus.getGroup()).isdir(dirStatus.isDir())
+        .length(dirStatus.getLen()).mtime(dirStatus.getModificationTime())
+        .owner(dirStatus.getOwner()).path(dirStatus.getLocalNameInBytes())
+        .perm(dirStatus.getPermission()).replication(dirStatus.getReplication())
+        .storagePolicy(dirStatus.getStoragePolicy())
+        .symlink(dirStatus.getSymlinkInBytes()).build();
+  }
+}
\ No newline at end of file
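
A minimal usage sketch of the helper above, assuming dirStatus is a mount
point status previously returned by getFileInfo():

    import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
    import org.apache.hadoop.hdfs.server.federation.router.FederationUtil;

    public class MountStatusExample {
      // Sketch: rebuild a mount point status with an aggregated child count
      // (here, an illustrative value of 2).
      static HdfsFileStatus withChildren(HdfsFileStatus dirStatus) {
        return FederationUtil.updateMountPointStatus(dirStatus, 2);
      }
    }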
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 81717ca..2089c57 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.updateMountPointStatus;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
 import org.apache.hadoop.fs.BatchedRemoteIterator;
@@ -669,7 +670,6 @@ public class RouterClientProtocol implements ClientProtocol {
         if (dates != null && dates.containsKey(child)) {
           date = dates.get(child);
         }
-        // TODO add number of children
         HdfsFileStatus dirStatus = getMountPointStatus(child, 0, date);
 
         // This may overwrite existing listing entries with the mount point
@@ -1663,12 +1663,13 @@ public class RouterClientProtocol implements ClientProtocol {
     // Get the file info from everybody
     Map<RemoteLocation, HdfsFileStatus> results =
         rpcClient.invokeConcurrent(locations, method, HdfsFileStatus.class);
-
+    int children = 0;
     // We return the first file
     HdfsFileStatus dirStatus = null;
     for (RemoteLocation loc : locations) {
       HdfsFileStatus fileStatus = results.get(loc);
       if (fileStatus != null) {
+        children += fileStatus.getChildrenNum();
         if (!fileStatus.isDirectory()) {
           return fileStatus;
         } else if (dirStatus == null) {
@@ -1676,7 +1677,10 @@ public class RouterClientProtocol implements ClientProtocol {
         }
       }
     }
-    return dirStatus;
+    if (dirStatus != null) {
+      return updateMountPointStatus(dirStatus, children);
+    }
+    return null;
   }
 
   /**
@@ -1732,12 +1736,23 @@ public class RouterClientProtocol implements ClientProtocol {
     String group = this.superGroup;
     if (subclusterResolver instanceof MountTableResolver) {
       try {
+        String mName = name.startsWith("/") ? name : "/" + name;
         MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
-        MountTable entry = mountTable.getMountPoint(name);
+        MountTable entry = mountTable.getMountPoint(mName);
         if (entry != null) {
-          permission = entry.getMode();
-          owner = entry.getOwnerName();
-          group = entry.getGroupName();
+          HdfsFileStatus fInfo = getFileInfoAll(entry.getDestinations(),
+              new RemoteMethod("getFileInfo", new Class<?>[] {String.class},
+                  new RemoteParam()));
+          if (fInfo != null) {
+            permission = fInfo.getPermission();
+            owner = fInfo.getOwner();
+            group = fInfo.getGroup();
+            childrenNum = fInfo.getChildrenNum();
+          } else {
+            permission = entry.getMode();
+            owner = entry.getOwnerName();
+            group = entry.getGroupName();
+          }
         }
       } catch (IOException e) {
         LOG.error("Cannot get mount point: {}", e.getMessage());
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
index d2b78d3..9538d71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
@@ -60,18 +61,21 @@ import org.junit.Test;
 public class TestRouterMountTable {
 
   private static StateStoreDFSCluster cluster;
-  private static NamenodeContext nnContext;
+  private static NamenodeContext nnContext0;
+  private static NamenodeContext nnContext1;
   private static RouterContext routerContext;
   private static MountTableResolver mountTable;
   private static ClientProtocol routerProtocol;
   private static long startTime;
+  private static FileSystem nnFs0;
+  private static FileSystem nnFs1;
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
     startTime = Time.now();
 
     // Build and start a federated cluster
-    cluster = new StateStoreDFSCluster(false, 1);
+    cluster = new StateStoreDFSCluster(false, 2);
     Configuration conf = new RouterConfigBuilder()
         .stateStore()
         .admin()
@@ -83,7 +87,10 @@ public class TestRouterMountTable {
     cluster.waitClusterUp();
 
     // Get the end points
-    nnContext = cluster.getRandomNamenode();
+    nnContext0 = cluster.getNamenode("ns0", null);
+    nnContext1 = cluster.getNamenode("ns1", null);
+    nnFs0 = nnContext0.getFileSystem();
+    nnFs1 = nnContext1.getFileSystem();
     routerContext = cluster.getRandomRouter();
     Router router = routerContext.getRouter();
     routerProtocol = routerContext.getClient().getNamenode();
@@ -129,12 +136,11 @@ public class TestRouterMountTable {
     assertTrue(addMountTable(regularEntry));
 
     // Create a folder which should show in all locations
-    final FileSystem nnFs = nnContext.getFileSystem();
     final FileSystem routerFs = routerContext.getFileSystem();
     assertTrue(routerFs.mkdirs(new Path("/regular/newdir")));
 
     FileStatus dirStatusNn =
-        nnFs.getFileStatus(new Path("/testdir/newdir"));
+        nnFs0.getFileStatus(new Path("/testdir/newdir"));
     assertTrue(dirStatusNn.isDirectory());
     FileStatus dirStatusRegular =
         routerFs.getFileStatus(new Path("/regular/newdir"));
@@ -179,93 +185,248 @@ public class TestRouterMountTable {
    */
   @Test
   public void testListFilesTime() throws Exception {
-    // Add mount table entry
-    MountTable addEntry = MountTable.newInstance(
-        "/testdir", Collections.singletonMap("ns0", "/testdir"));
-    assertTrue(addMountTable(addEntry));
-    addEntry = MountTable.newInstance(
-        "/testdir2", Collections.singletonMap("ns0", "/testdir2"));
-    assertTrue(addMountTable(addEntry));
-    addEntry = MountTable.newInstance(
-        "/testdir/subdir", Collections.singletonMap("ns0", "/testdir/subdir"));
-    assertTrue(addMountTable(addEntry));
-    addEntry = MountTable.newInstance(
-        "/testdir3/subdir1", Collections.singletonMap("ns0", "/testdir3"));
-    assertTrue(addMountTable(addEntry));
-    addEntry = MountTable.newInstance(
-        "/testA/testB/testC/testD", Collections.singletonMap("ns0", "/test"));
-    assertTrue(addMountTable(addEntry));
+    try {
+      // Add mount table entry
+      MountTable addEntry = MountTable.newInstance("/testdir",
+          Collections.singletonMap("ns0", "/testdir"));
+      assertTrue(addMountTable(addEntry));
+      addEntry = MountTable.newInstance("/testdir2",
+          Collections.singletonMap("ns0", "/testdir2"));
+      assertTrue(addMountTable(addEntry));
+      addEntry = MountTable.newInstance("/testdir/subdir",
+          Collections.singletonMap("ns0", "/testdir/subdir"));
+      assertTrue(addMountTable(addEntry));
+      addEntry = MountTable.newInstance("/testdir3/subdir1",
+          Collections.singletonMap("ns0", "/testdir3"));
+      assertTrue(addMountTable(addEntry));
+      addEntry = MountTable.newInstance("/testA/testB/testC/testD",
+          Collections.singletonMap("ns0", "/test"));
+      assertTrue(addMountTable(addEntry));
 
-    // Create test dir in NN
-    final FileSystem nnFs = nnContext.getFileSystem();
-    assertTrue(nnFs.mkdirs(new Path("/newdir")));
+      // Create test dir in NN
+      assertTrue(nnFs0.mkdirs(new Path("/newdir")));
 
-    Map<String, Long> pathModTime = new TreeMap<>();
-    for (String mount : mountTable.getMountPoints("/")) {
-      if (mountTable.getMountPoint("/"+mount) != null) {
-        pathModTime.put(mount, mountTable.getMountPoint("/"+mount)
-            .getDateModified());
-      } else {
-        List<MountTable> entries = mountTable.getMounts("/"+mount);
-        for (MountTable entry : entries) {
-          if (pathModTime.get(mount) == null ||
-              pathModTime.get(mount) < entry.getDateModified()) {
-            pathModTime.put(mount, entry.getDateModified());
+      Map<String, Long> pathModTime = new TreeMap<>();
+      for (String mount : mountTable.getMountPoints("/")) {
+        if (mountTable.getMountPoint("/" + mount) != null) {
+          pathModTime.put(mount,
+              mountTable.getMountPoint("/" + mount).getDateModified());
+        } else {
+          List<MountTable> entries = mountTable.getMounts("/" + mount);
+          for (MountTable entry : entries) {
+            if (pathModTime.get(mount) == null
+                || pathModTime.get(mount) < entry.getDateModified()) {
+              pathModTime.put(mount, entry.getDateModified());
+            }
           }
         }
       }
+      FileStatus[] iterator = nnFs0.listStatus(new Path("/"));
+      for (FileStatus file : iterator) {
+        pathModTime.put(file.getPath().getName(), file.getModificationTime());
+      }
+      // Fetch listing
+      DirectoryListing listing =
+          routerProtocol.getListing("/", HdfsFileStatus.EMPTY_NAME, false);
+      Iterator<String> pathModTimeIterator = pathModTime.keySet().iterator();
+
+      // Match date/time for each path returned
+      for (HdfsFileStatus f : listing.getPartialListing()) {
+        String fileName = pathModTimeIterator.next();
+        String currentFile = f.getFullPath(new Path("/")).getName();
+        Long currentTime = f.getModificationTime();
+        Long expectedTime = pathModTime.get(currentFile);
+
+        assertEquals(currentFile, fileName);
+        assertTrue(currentTime > startTime);
+        assertEquals(currentTime, expectedTime);
+      }
+      // Verify the total number of results found/matched
+      assertEquals(pathModTime.size(), listing.getPartialListing().length);
+    } finally {
+      nnFs0.delete(new Path("/newdir"), true);
     }
-    FileStatus[] iterator = nnFs.listStatus(new Path("/"));
-    for (FileStatus file : iterator) {
-      pathModTime.put(file.getPath().getName(), file.getModificationTime());
+  }
+
+  /**
+   * Verify permission for a mount point when the actual destination is not
+   * present. It returns the permissions of the mount point.
+   */
+  @Test
+  public void testMountTablePermissionsNoDest() throws IOException {
+    MountTable addEntry;
+    addEntry = MountTable.newInstance("/testdir1",
+        Collections.singletonMap("ns0", "/tmp/testdir1"));
+    addEntry.setGroupName("group1");
+    addEntry.setOwnerName("owner1");
+    addEntry.setMode(FsPermission.createImmutable((short) 0775));
+    assertTrue(addMountTable(addEntry));
+    FileStatus[] list = routerContext.getFileSystem().listStatus(new Path("/"));
+    assertEquals("group1", list[0].getGroup());
+    assertEquals("owner1", list[0].getOwner());
+    assertEquals((short) 0775, list[0].getPermission().toShort());
+  }
+
+  /**
+   * Verify permission for a mount point when the actual destination is
+   * present. It returns the permissions of the actual destination pointed
+   * to by the mount point.
+   */
+  @Test
+  public void testMountTablePermissionsWithDest() throws IOException {
+    try {
+      MountTable addEntry = MountTable.newInstance("/testdir",
+          Collections.singletonMap("ns0", "/tmp/testdir"));
+      assertTrue(addMountTable(addEntry));
+      nnFs0.mkdirs(new Path("/tmp/testdir"));
+      nnFs0.setOwner(new Path("/tmp/testdir"), "Aowner", "Agroup");
+      nnFs0.setPermission(new Path("/tmp/testdir"),
+          FsPermission.createImmutable((short) 775));
+      FileStatus[] list =
+          routerContext.getFileSystem().listStatus(new Path("/"));
+      assertEquals("Agroup", list[0].getGroup());
+      assertEquals("Aowner", list[0].getOwner());
+      assertEquals((short) 775, list[0].getPermission().toShort());
+    } finally {
+      nnFs0.delete(new Path("/tmp"), true);
     }
-    // Fetch listing
-    DirectoryListing listing =
-        routerProtocol.getListing("/", HdfsFileStatus.EMPTY_NAME, false);
-    Iterator<String> pathModTimeIterator = pathModTime.keySet().iterator();
+  }
 
-    // Match date/time for each path returned
-    for(HdfsFileStatus f : listing.getPartialListing()) {
-      String fileName = pathModTimeIterator.next();
-      String currentFile = f.getFullPath(new Path("/")).getName();
-      Long currentTime = f.getModificationTime();
-      Long expectedTime = pathModTime.get(currentFile);
+  /**
+   * Verify permission for a mount point when multiple destinations are
+   * present, both having the same permissions. It returns the common
+   * permissions of the actual destinations pointed to by the mount point.
+   */
+  @Test
+  public void testMountTablePermissionsMultiDest() throws IOException {
+    try {
+      Map<String, String> destMap = new HashMap<>();
+      destMap.put("ns0", "/tmp/testdir");
+      destMap.put("ns1", "/tmp/testdir01");
+      MountTable addEntry = MountTable.newInstance("/testdir", destMap);
+      assertTrue(addMountTable(addEntry));
+      nnFs0.mkdirs(new Path("/tmp/testdir"));
+      nnFs0.setOwner(new Path("/tmp/testdir"), "Aowner", "Agroup");
+      nnFs0.setPermission(new Path("/tmp/testdir"),
+          FsPermission.createImmutable((short) 775));
+      nnFs1.mkdirs(new Path("/tmp/testdir01"));
+      nnFs1.setOwner(new Path("/tmp/testdir01"), "Aowner", "Agroup");
+      nnFs1.setPermission(new Path("/tmp/testdir01"),
+          FsPermission.createImmutable((short) 775));
+      FileStatus[] list =
+          routerContext.getFileSystem().listStatus(new Path("/"));
+      assertEquals("Agroup", list[0].getGroup());
+      assertEquals("Aowner", list[0].getOwner());
+      assertEquals((short) 775, list[0].getPermission().toShort());
+    } finally {
+      nnFs0.delete(new Path("/tmp"), true);
+      nnFs1.delete(new Path("/tmp"), true);
+    }
+  }
 
-      assertEquals(currentFile, fileName);
-      assertTrue(currentTime > startTime);
-      assertEquals(currentTime, expectedTime);
+  /**
+   * Verify permission for a mount point when multiple destinations are
+   * present with different permissions. It returns the actual permissions
+   * of either of the destinations pointed to by the mount point.
+   */
+  @Test
+  public void testMountTablePermissionsMultiDestDifferentPerm()
+      throws IOException {
+    try {
+      Map<String, String> destMap = new HashMap<>();
+      destMap.put("ns0", "/tmp/testdir");
+      destMap.put("ns1", "/tmp/testdir01");
+      MountTable addEntry = MountTable.newInstance("/testdir", destMap);
+      assertTrue(addMountTable(addEntry));
+      nnFs0.mkdirs(new Path("/tmp/testdir"));
+      nnFs0.setOwner(new Path("/tmp/testdir"), "Aowner", "Agroup");
+      nnFs0.setPermission(new Path("/tmp/testdir"),
+          FsPermission.createImmutable((short) 775));
+      nnFs1.mkdirs(new Path("/tmp/testdir01"));
+      nnFs1.setOwner(new Path("/tmp/testdir01"), "Aowner01", "Agroup01");
+      nnFs1.setPermission(new Path("/tmp/testdir01"),
+          FsPermission.createImmutable((short) 755));
+      FileStatus[] list =
+          routerContext.getFileSystem().listStatus(new Path("/"));
+      assertTrue("Agroup".equals(list[0].getGroup())
+          || "Agroup01".equals(list[0].getGroup()));
+      assertTrue("Aowner".equals(list[0].getOwner())
+          || "Aowner01".equals(list[0].getOwner()));
+      assertTrue(((short) 775) == list[0].getPermission().toShort()
+          || ((short) 755) == list[0].getPermission().toShort());
+    } finally {
+      nnFs0.delete(new Path("/tmp"), true);
+      nnFs1.delete(new Path("/tmp"), true);
     }
-    // Verify the total number of results found/matched
-    assertEquals(pathModTime.size(), listing.getPartialListing().length);
   }
 
   /**
-   * Verify that the file listing contains correct permission.
+   * Validate whether the mount point name gets resolved. On successful
+   * resolution, the details returned are the ones actually set on the
+   * mount point.
    */
   @Test
-  public void testMountTablePermissions() throws Exception {
-    // Add mount table entries
-    MountTable addEntry = MountTable.newInstance(
-        "/testdir1", Collections.singletonMap("ns0", "/testdir1"));
+  public void testMountPointResolved() throws IOException {
+    MountTable addEntry = MountTable.newInstance("/testdir",
+        Collections.singletonMap("ns0", "/tmp/testdir"));
     addEntry.setGroupName("group1");
     addEntry.setOwnerName("owner1");
-    addEntry.setMode(FsPermission.createImmutable((short)0775));
-    assertTrue(addMountTable(addEntry));
-    addEntry = MountTable.newInstance(
-        "/testdir2", Collections.singletonMap("ns0", "/testdir2"));
-    addEntry.setGroupName("group2");
-    addEntry.setOwnerName("owner2");
-    addEntry.setMode(FsPermission.createImmutable((short)0755));
     assertTrue(addMountTable(addEntry));
+    HdfsFileStatus finfo = routerProtocol.getFileInfo("/testdir");
+    FileStatus[] finfo1 =
+        routerContext.getFileSystem().listStatus(new Path("/"));
+    assertEquals("owner1", finfo.getOwner());
+    assertEquals("owner1", finfo1[0].getOwner());
+    assertEquals("group1", finfo.getGroup());
+    assertEquals("group1", finfo1[0].getGroup());
+  }
 
-    HdfsFileStatus fs = routerProtocol.getFileInfo("/testdir1");
-    assertEquals("group1", fs.getGroup());
-    assertEquals("owner1", fs.getOwner());
-    assertEquals((short) 0775, fs.getPermission().toShort());
+  /**
+   * Validate the number of children for the mount point. It must be equal
+   * to the number of children of the destination pointed to by the mount
+   * point.
+   */
+  @Test
+  public void testMountPointChildren() throws IOException {
+    try {
+      MountTable addEntry = MountTable.newInstance("/testdir",
+          Collections.singletonMap("ns0", "/tmp/testdir"));
+      assertTrue(addMountTable(addEntry));
+      nnFs0.mkdirs(new Path("/tmp/testdir"));
+      nnFs0.mkdirs(new Path("/tmp/testdir/1"));
+      nnFs0.mkdirs(new Path("/tmp/testdir/2"));
+      FileStatus[] finfo1 =
+          routerContext.getFileSystem().listStatus(new Path("/"));
+      assertEquals(2, ((HdfsFileStatus) finfo1[0]).getChildrenNum());
+    } finally {
+      nnFs0.delete(new Path("/tmp"), true);
+    }
+  }
 
-    fs = routerProtocol.getFileInfo("/testdir2");
-    assertEquals("group2", fs.getGroup());
-    assertEquals("owner2", fs.getOwner());
-    assertEquals((short) 0755, fs.getPermission().toShort());
+  /**
+   * Validate the number of children for a mount point pointing to multiple
+   * destinations. It must be equal to the sum of the number of children of
+   * the destinations pointed to by the mount point.
+   */
+  @Test
+  public void testMountPointChildrenMultiDest() throws IOException {
+    try {
+      Map<String, String> destMap = new HashMap<>();
+      destMap.put("ns0", "/tmp/testdir");
+      destMap.put("ns1", "/tmp/testdir01");
+      MountTable addEntry = MountTable.newInstance("/testdir", destMap);
+      assertTrue(addMountTable(addEntry));
+      nnFs0.mkdirs(new Path("/tmp/testdir"));
+      nnFs1.mkdirs(new Path("/tmp/testdir01"));
+      nnFs0.mkdirs(new Path("/tmp/testdir/1"));
+      nnFs1.mkdirs(new Path("/tmp/testdir01/1"));
+      FileStatus[] finfo1 =
+          routerContext.getFileSystem().listStatus(new Path("/"));
+      assertEquals(2, ((HdfsFileStatus) finfo1[0]).getChildrenNum());
+    } finally {
+      nnFs0.delete(new Path("/tmp"), true);
+      nnFs1.delete(new Path("/tmp"), true);
+    }
   }
 }
\ No newline at end of file




[hadoop] 19/41: HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c49a422d89dcd3815d9800e1efeb7fdae3269a19
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Wed Dec 19 11:40:00 2018 +0800

    HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.
---
 .../RouterAdminProtocolServerSideTranslatorPB.java |  23 ++
 .../RouterAdminProtocolTranslatorPB.java           |  21 ++
 .../federation/resolver/MountTableManager.java     |  16 +
 .../router/MountTableRefresherService.java         | 289 +++++++++++++++
 .../router/MountTableRefresherThread.java          |  96 +++++
 .../server/federation/router/RBFConfigKeys.java    |  25 ++
 .../hdfs/server/federation/router/Router.java      |  53 ++-
 .../federation/router/RouterAdminServer.java       |  28 +-
 .../federation/router/RouterHeartbeatService.java  |   5 +
 .../server/federation/store/MountTableStore.java   |  24 ++
 .../server/federation/store/StateStoreUtils.java   |  26 ++
 .../federation/store/impl/MountTableStoreImpl.java |  18 +
 .../protocol/RefreshMountTableEntriesRequest.java  |  34 ++
 .../protocol/RefreshMountTableEntriesResponse.java |  44 +++
 .../pb/RefreshMountTableEntriesRequestPBImpl.java  |  67 ++++
 .../pb/RefreshMountTableEntriesResponsePBImpl.java |  74 ++++
 .../federation/store/records/RouterState.java      |   4 +
 .../store/records/impl/pb/RouterStatePBImpl.java   |  10 +
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  33 +-
 .../src/main/proto/FederationProtocol.proto        |   8 +
 .../src/main/proto/RouterProtocol.proto            |   5 +
 .../src/main/resources/hdfs-rbf-default.xml        |  34 ++
 .../src/site/markdown/HDFSRouterFederation.md      |   9 +
 .../server/federation/FederationTestUtils.java     |  27 ++
 .../server/federation/RouterConfigBuilder.java     |  12 +
 .../federation/router/TestRouterAdminCLI.java      |  25 +-
 .../router/TestRouterMountTableCacheRefresh.java   | 396 +++++++++++++++++++++
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 28 files changed, 1402 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 6341ebd..a31c46d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -37,6 +37,8 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -58,6 +60,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -78,6 +82,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeMo
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeResponsePBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesRequestPBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.UpdateMountTableEntryRequestPBImpl;
@@ -275,4 +281,21 @@ public class RouterAdminProtocolServerSideTranslatorPB implements
       throw new ServiceException(e);
     }
   }
+
+  @Override
+  public RefreshMountTableEntriesResponseProto refreshMountTableEntries(
+      RpcController controller, RefreshMountTableEntriesRequestProto request)
+      throws ServiceException {
+    try {
+      RefreshMountTableEntriesRequest req =
+          new RefreshMountTableEntriesRequestPBImpl(request);
+      RefreshMountTableEntriesResponse response =
+          server.refreshMountTableEntries(req);
+      RefreshMountTableEntriesResponsePBImpl responsePB =
+          (RefreshMountTableEntriesResponsePBImpl) response;
+      return responsePB.getProto();
+    } catch (IOException e) {
+      throw new ServiceException(e);
+    }
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
index 6e24438..1fbb06d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
@@ -38,6 +38,8 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -61,6 +63,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -77,6 +81,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountT
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.LeaveSafeModeResponsePBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesRequestPBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RefreshMountTableEntriesResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.RemoveMountTableEntryResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.UpdateMountTableEntryRequestPBImpl;
@@ -267,4 +273,19 @@ public class RouterAdminProtocolTranslatorPB
       throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
     }
   }
+
+  @Override
+  public RefreshMountTableEntriesResponse refreshMountTableEntries(
+      RefreshMountTableEntriesRequest request) throws IOException {
+    RefreshMountTableEntriesRequestPBImpl requestPB =
+        (RefreshMountTableEntriesRequestPBImpl) request;
+    RefreshMountTableEntriesRequestProto proto = requestPB.getProto();
+    try {
+      RefreshMountTableEntriesResponseProto response =
+          rpcProxy.refreshMountTableEntries(null, proto);
+      return new RefreshMountTableEntriesResponsePBImpl(response);
+    } catch (ServiceException e) {
+      throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
+    }
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
index c2e4a5b..9a1e416 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
@@ -23,6 +23,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntr
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -77,4 +79,18 @@ public interface MountTableManager {
    */
   GetMountTableEntriesResponse getMountTableEntries(
       GetMountTableEntriesRequest request) throws IOException;
+
+  /**
+   * Refresh the mount table entry cache from the state store. The cache is
+   * updated periodically, but this API allows it to be refreshed immediately.
+   * It is primarily meant to be called by the admin server: whenever mount
+   * table entries change, the admin server calls this API to refresh the
+   * mount table cache of all the routers.
+   *
+   * @param request Fully populated request object.
+   * @return True if the mount table cache was refreshed without any error.
+   * @throws IOException Throws exception if the data store is not initialized.
+   */
+  RefreshMountTableEntriesResponse refreshMountTableEntries(
+      RefreshMountTableEntriesRequest request) throws IOException;
 }
\ No newline at end of file
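
For reference, exercising the new API from a client looks roughly like the
sketch below. This is a minimal, hedged example: the router admin address is
a placeholder, imports are omitted, and it assumes the RouterClient, request
and response types introduced in this patch are on the classpath.

    // Sketch: ask one router to refresh its mount table cache immediately.
    Configuration conf = new HdfsConfiguration();
    InetSocketAddress routerSocket =
        NetUtils.createSocketAddr("router0.example.com:8111"); // placeholder
    RouterClient client = new RouterClient(routerSocket, conf);
    try {
      MountTableManager mountTable = client.getMountTableManager();
      RefreshMountTableEntriesResponse response =
          mountTable.refreshMountTableEntries(
              RefreshMountTableEntriesRequest.newInstance());
      System.out.println("Cache refreshed: " + response.getResult());
    } finally {
      client.close();
    }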
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherService.java
new file mode 100644
index 0000000..fafcef4
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherService.java
@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils;
+import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.service.AbstractService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.cache.CacheBuilder;
+import com.google.common.cache.CacheLoader;
+import com.google.common.cache.LoadingCache;
+import com.google.common.cache.RemovalListener;
+import com.google.common.cache.RemovalNotification;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * This service is invoked from {@link MountTableStore} whenever mount table
+ * entries change, and it updates the mount table entry cache on the local
+ * router as well as on all remote routers. The local router is refreshed by
+ * calling the {@link MountTableStore#loadCache(boolean)} API directly, with
+ * no RPC involved; remote routers are refreshed through RouterClient (an RPC
+ * call). To improve performance, each router is refreshed in a separate
+ * thread and all connections are cached. Cached connections are removed from
+ * the cache and closed once their maximum live time has elapsed.
+ */
+public class MountTableRefresherService extends AbstractService {
+  private static final String ROUTER_CONNECT_ERROR_MSG =
+      "Router {} connection failed. Mount table cache will not refesh.";
+  private static final Logger LOG =
+      LoggerFactory.getLogger(MountTableRefresherService.class);
+
+  /** Local router. */
+  private final Router router;
+  /** Mount table store. */
+  private MountTableStore mountTableStore;
+  /** Local router admin address in the form of host:port. */
+  private String localAdminAddress;
+  /** Timeout in ms to update mount table cache on all the routers. */
+  private long cacheUpdateTimeout;
+
+  /**
+   * Cache of all router admin clients, so a client does not have to be
+   * created again and again. The router admin address (host:port) is used as
+   * the key for cached RouterClient objects.
+   */
+  private LoadingCache<String, RouterClient> routerClientsCache;
+
+  /**
+   * Removes expired RouterClient from routerClientsCache.
+   */
+  private ScheduledExecutorService clientCacheCleanerScheduler;
+
+  /**
+   * Create a new service to refresh the mount table cache whenever mount
+   * table entries change.
+   *
+   * @param router Router whose mount table cache will be refreshed.
+   */
+  public MountTableRefresherService(Router router) {
+    super(MountTableRefresherService.class.getSimpleName());
+    this.router = router;
+  }
+
+  @Override
+  protected void serviceInit(Configuration conf) throws Exception {
+    super.serviceInit(conf);
+    this.mountTableStore = getMountTableStore();
+    // attach this service to mount table store.
+    this.mountTableStore.setRefreshService(this);
+    this.localAdminAddress =
+        StateStoreUtils.getHostPortString(router.getAdminServerAddress());
+    this.cacheUpdateTimeout = conf.getTimeDuration(
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT,
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT_DEFAULT,
+        TimeUnit.MILLISECONDS);
+    long routerClientMaxLiveTime = conf.getTimeDuration(
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME,
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME_DEFAULT,
+        TimeUnit.MILLISECONDS);
+    routerClientsCache = CacheBuilder.newBuilder()
+        .expireAfterWrite(routerClientMaxLiveTime, TimeUnit.MILLISECONDS)
+        .removalListener(getClientRemover()).build(getClientCreator());
+
+    initClientCacheCleaner(routerClientMaxLiveTime);
+  }
+
+  private void initClientCacheCleaner(long routerClientMaxLiveTime) {
+    clientCacheCleanerScheduler =
+        Executors.newSingleThreadScheduledExecutor(new ThreadFactoryBuilder()
+        .setNameFormat("MountTableRefresh_ClientsCacheCleaner")
+        .setDaemon(true).build());
+    /*
+     * When cleanUp() is called, expired RouterClients are removed and
+     * closed.
+     */
+    clientCacheCleanerScheduler.scheduleWithFixedDelay(
+        () -> routerClientsCache.cleanUp(), routerClientMaxLiveTime,
+        routerClientMaxLiveTime, TimeUnit.MILLISECONDS);
+  }
+
+  /**
+   * Create the cache entry removal listener.
+   */
+  private RemovalListener<String, RouterClient> getClientRemover() {
+    return new RemovalListener<String, RouterClient>() {
+      @Override
+      public void onRemoval(
+          RemovalNotification<String, RouterClient> notification) {
+        closeRouterClient(notification.getValue());
+      }
+    };
+  }
+
+  @VisibleForTesting
+  protected void closeRouterClient(RouterClient client) {
+    try {
+      client.close();
+    } catch (IOException e) {
+      LOG.error("Error while closing RouterClient", e);
+    }
+  }
+
+  /**
+   * Creates a RouterClient and caches it.
+   */
+  private CacheLoader<String, RouterClient> getClientCreator() {
+    return new CacheLoader<String, RouterClient>() {
+      public RouterClient load(String adminAddress) throws IOException {
+        InetSocketAddress routerSocket =
+            NetUtils.createSocketAddr(adminAddress);
+        Configuration config = getConfig();
+        return createRouterClient(routerSocket, config);
+      }
+    };
+  }
+
+  @VisibleForTesting
+  protected RouterClient createRouterClient(InetSocketAddress routerSocket,
+      Configuration config) throws IOException {
+    return new RouterClient(routerSocket, config);
+  }
+
+  @Override
+  protected void serviceStart() throws Exception {
+    super.serviceStart();
+  }
+
+  @Override
+  protected void serviceStop() throws Exception {
+    super.serviceStop();
+    clientCacheCleanerScheduler.shutdown();
+    // remove and close all admin clients
+    routerClientsCache.invalidateAll();
+  }
+
+  private MountTableStore getMountTableStore() throws IOException {
+    MountTableStore mountTblStore =
+        router.getStateStore().getRegisteredRecordStore(MountTableStore.class);
+    if (mountTblStore == null) {
+      throw new IOException("Mount table state store is not available.");
+    }
+    return mountTblStore;
+  }
+
+  /**
+   * Refresh mount table cache of this router as well as all other routers.
+   */
+  public void refresh() throws StateStoreUnavailableException {
+    List<RouterState> cachedRecords =
+        router.getRouterStateManager().getCachedRecords();
+    List<MountTableRefresherThread> refreshThreads = new ArrayList<>();
+    for (RouterState routerState : cachedRecords) {
+      String adminAddress = routerState.getAdminAddress();
+      if (adminAddress == null || adminAddress.isEmpty()) {
+        // this router has not enabled router admin
+        continue;
+      }
+      // No use in calling refresh on a router that is not in the running state
+      if (routerState.getStatus() != RouterServiceState.RUNNING) {
+        LOG.info("Router {} is not running. Mount table cache will not be "
+            + "refreshed.", adminAddress);
+        // Remove the RouterClient if it is cached.
+        removeFromCache(adminAddress);
+      } else if (isLocalAdmin(adminAddress)) {
+        /*
+         * The local router's cache update does not require an RPC call, so no
+         * RouterClient is needed.
+         */
+        refreshThreads.add(getLocalRefresher(adminAddress));
+      } else {
+        try {
+          RouterClient client = routerClientsCache.get(adminAddress);
+          refreshThreads.add(new MountTableRefresherThread(
+              client.getMountTableManager(), adminAddress));
+        } catch (ExecutionException execExcep) {
+          // Cannot connect; the router seems to be stopped.
+          LOG.warn(ROUTER_CONNECT_ERROR_MSG, adminAddress, execExcep);
+        }
+      }
+    }
+    if (!refreshThreads.isEmpty()) {
+      invokeRefresh(refreshThreads);
+    }
+  }
+
+  @VisibleForTesting
+  protected MountTableRefresherThread getLocalRefresher(String adminAddress) {
+    return new MountTableRefresherThread(router.getAdminServer(), adminAddress);
+  }
+
+  private void removeFromCache(String adminAddress) {
+    routerClientsCache.invalidate(adminAddress);
+  }
+
+  private void invokeRefresh(List<MountTableRefresherThread> refreshThreads) {
+    CountDownLatch countDownLatch = new CountDownLatch(refreshThreads.size());
+    // start all the threads
+    for (MountTableRefresherThread refThread : refreshThreads) {
+      refThread.setCountDownLatch(countDownLatch);
+      refThread.start();
+    }
+    try {
+      /*
+       * Wait for all the threads to complete. The await method returns false
+       * if the refresh did not finish within the specified time.
+       */
+      boolean allReqCompleted =
+          countDownLatch.await(cacheUpdateTimeout, TimeUnit.MILLISECONDS);
+      if (!allReqCompleted) {
+        LOG.warn("Not all router admins updated their cache");
+      }
+    } catch (InterruptedException e) {
+      LOG.error("Mount table cache refresher was interrupted.", e);
+    }
+    logResult(refreshThreads);
+  }
+
+  private boolean isLocalAdmin(String adminAddress) {
+    return adminAddress.contentEquals(localAdminAddress);
+  }
+
+  private void logResult(List<MountTableRefresherThread> refreshThreads) {
+    int successCount = 0;
+    int failureCount = 0;
+    for (MountTableRefresherThread mountTableRefreshThread : refreshThreads) {
+      if (mountTableRefreshThread.isSuccess()) {
+        successCount++;
+      } else {
+        failureCount++;
+        // Remove the RouterClient from the cache so that a new client is
+        // created on the next refresh.
+        removeFromCache(mountTableRefreshThread.getAdminAddress());
+      }
+    }
+    LOG.info("Mount table entries cache refresh successCount={}, "
+        + "failureCount={}", successCount, failureCount);
+  }
+}
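
The connection caching in this service is standard Guava: a LoadingCache with
expireAfterWrite, a removal listener that closes evicted clients, and a
scheduled cleanUp() so expiry is not purely lazy. A self-contained sketch of
just that pattern follows; ExpiringConnectionCache, openConnection and the
five-minute TTL are illustrative, not from this patch.

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import com.google.common.cache.RemovalListener;

    public class ExpiringConnectionCache {
      private final LoadingCache<String, Closeable> clients =
          CacheBuilder.newBuilder()
              .expireAfterWrite(5, TimeUnit.MINUTES)
              .removalListener((RemovalListener<String, Closeable>) n -> {
                try {
                  n.getValue().close(); // close connections as they expire
                } catch (IOException e) {
                  System.err.println("Error closing " + n.getKey());
                }
              })
              .build(new CacheLoader<String, Closeable>() {
                @Override
                public Closeable load(String address) {
                  return openConnection(address); // illustrative factory
                }
              });

      public ExpiringConnectionCache() {
        // Guava evicts lazily; force expired entries through the removal
        // listener periodically so connections actually get closed.
        Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(
            clients::cleanUp, 5, 5, TimeUnit.MINUTES);
      }

      private static Closeable openConnection(String address) {
        return () -> { }; // stand-in; a real impl would dial the address
      }
    }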
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherThread.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherThread.java
new file mode 100644
index 0000000..c9967a2
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/MountTableRefresherThread.java
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.util.concurrent.CountDownLatch;
+
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Thread to update the mount table cache on a router; one thread is created
+ * per router to be refreshed.
+ */
+public class MountTableRefresherThread extends Thread {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(MountTableRefresherThread.class);
+  private boolean success;
+  /** Admin server on which the refresh is to be invoked. */
+  private String adminAddress;
+  private CountDownLatch countDownLatch;
+  private MountTableManager manager;
+
+  public MountTableRefresherThread(MountTableManager manager,
+      String adminAddress) {
+    this.manager = manager;
+    this.adminAddress = adminAddress;
+    setName("MountTableRefresh_" + adminAddress);
+    setDaemon(true);
+  }
+
+  /**
+   * Refresh the mount table cache of the local and remote routers; the two
+   * are handled differently. Suppose there are three routers R1, R2 and R3,
+   * and a user wants to add a new mount table entry. The user connects to
+   * only one router, not all of them. If the user connects to R1 and adds
+   * the entry through the API or CLI, then in this context R1 is the local
+   * router and R2 and R3 are remote routers. Because the add was invoked on
+   * R1, R1 updates its cache locally without an RPC call, while RPC calls
+   * are made to update the caches on R2 and R3.
+   */
+  @Override
+  public void run() {
+    try {
+      RefreshMountTableEntriesResponse refreshMountTableEntries =
+          manager.refreshMountTableEntries(
+              RefreshMountTableEntriesRequest.newInstance());
+      success = refreshMountTableEntries.getResult();
+    } catch (IOException e) {
+      LOG.error("Failed to refresh mount table entries cache at router {}",
+          adminAddress, e);
+    } finally {
+      countDownLatch.countDown();
+    }
+  }
+
+  /**
+   * @return true if cache was refreshed successfully.
+   */
+  public boolean isSuccess() {
+    return success;
+  }
+
+  public void setCountDownLatch(CountDownLatch countDownLatch) {
+    this.countDownLatch = countDownLatch;
+  }
+
+  @Override
+  public String toString() {
+    return "MountTableRefreshThread [success=" + success + ", adminAddress="
+        + adminAddress + "]";
+  }
+
+  public String getAdminAddress() {
+    return adminAddress;
+  }
+}
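
The thread-per-router fan-out that drives these refresher threads is the
classic CountDownLatch pattern: each worker counts down in a finally block so
a failed refresh cannot hang the coordinator, and the overall wait is
bounded. A stand-alone sketch, where FanOutRefresh, targets and doRefresh are
illustrative names rather than code from this patch:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class FanOutRefresh {
      public static void refreshAll(List<String> targets, long timeoutMs)
          throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(targets.size());
        for (String target : targets) {
          Thread t = new Thread(() -> {
            try {
              doRefresh(target);    // illustrative per-target work
            } finally {
              latch.countDown();    // always count down, even on failure
            }
          });
          t.setDaemon(true);
          t.start();
        }
        // await() returns false if the latch did not reach zero in time.
        if (!latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
          System.err.println("Not all targets finished in " + timeoutMs + " ms");
        }
      }

      private static void doRefresh(String target) {
        // placeholder for the real refresh RPC
      }
    }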
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 0070de7..5e907c8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -204,6 +204,31 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "mount-table.max-cache-size";
   /** Remove cache entries if we have more than 10k. */
   public static final int FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT = 10000;
+  /**
+   * If true, the cache is updated immediately after a mount table entry is
+   * changed; otherwise it is updated periodically based on the configuration.
+   */
+  public static final String MOUNT_TABLE_CACHE_UPDATE =
+      FEDERATION_ROUTER_PREFIX + "mount-table.cache.update";
+  public static final boolean MOUNT_TABLE_CACHE_UPDATE_DEFAULT =
+      false;
+  /**
+   * Timeout to update mount table cache on all the routers.
+   */
+  public static final String MOUNT_TABLE_CACHE_UPDATE_TIMEOUT =
+      FEDERATION_ROUTER_PREFIX + "mount-table.cache.update.timeout";
+  public static final long MOUNT_TABLE_CACHE_UPDATE_TIMEOUT_DEFAULT =
+      TimeUnit.MINUTES.toMillis(1);
+  /**
+   * The remote router mount table cache is updated through RouterClient (an
+   * RPC call). To improve performance, RouterClient connections are cached,
+   * but they should not be kept in the cache forever. This property defines
+   * the maximum time a connection can be cached.
+   */
+  public static final String MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME =
+      FEDERATION_ROUTER_PREFIX + "mount-table.cache.update.client.max.time";
+  public static final long MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME_DEFAULT =
+      TimeUnit.MINUTES.toMillis(5);
   public static final String FEDERATION_MOUNT_TABLE_CACHE_ENABLE =
       FEDERATION_ROUTER_PREFIX + "mount-table.cache.enable";
   public static final boolean FEDERATION_MOUNT_TABLE_CACHE_ENABLE_DEFAULT =
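
To enable the feature programmatically, setting the new keys before router
startup should suffice. A hedged sketch: the first key string matches
hdfs-rbf-default.xml further below, the other two are inferred from the same
FEDERATION_ROUTER_PREFIX pattern, the values are arbitrary, and imports from
org.apache.hadoop.* are omitted.

    Configuration conf = new HdfsConfiguration();
    // Enable immediate mount table cache refresh after entry changes.
    conf.setBoolean("dfs.federation.router.mount-table.cache.update", true);
    // Overall timeout for refreshing all routers (default 1m).
    conf.setTimeDuration(
        "dfs.federation.router.mount-table.cache.update.timeout",
        1, TimeUnit.MINUTES);
    // Maximum time a cached RouterClient connection is kept (default 5m).
    conf.setTimeDuration(
        "dfs.federation.router.mount-table.cache.update.client.max.time",
        5, TimeUnit.MINUTES);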
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 3182e27..6a7437f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -254,9 +254,50 @@ public class Router extends CompositeService {
       addService(this.safemodeService);
     }
 
+    /*
+     * Refresh the mount table cache immediately after adding, modifying or
+     * deleting mount table entries. If this service is not enabled, the
+     * mount table cache is refreshed periodically by
+     * StateStoreCacheUpdateService.
+     */
+    if (conf.getBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE,
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_DEFAULT)) {
+      // There is no use in starting the refresh service if the state store
+      // and admin servers are not enabled.
+      String disabledDependentServices = getDisabledDependentServices();
+      /*
+       * A null disabledDependentServices means all dependent services are
+       * enabled.
+       */
+      if (disabledDependentServices == null) {
+        MountTableRefresherService refreshService =
+            new MountTableRefresherService(this);
+        addService(refreshService);
+        LOG.info("Service {} is enabled.",
+            MountTableRefresherService.class.getSimpleName());
+      } else {
+        LOG.warn(
+            "Service {} not enabled: depenendent service(s) {} not enabled.",
+            MountTableRefresherService.class.getSimpleName(),
+            disabledDependentServices);
+      }
+    }
+
     super.serviceInit(conf);
   }
 
+  private String getDisabledDependentServices() {
+    if (this.stateStore == null && this.adminServer == null) {
+      return StateStoreService.class.getSimpleName() + ","
+          + RouterAdminServer.class.getSimpleName();
+    } else if (this.stateStore == null) {
+      return StateStoreService.class.getSimpleName();
+    } else if (this.adminServer == null) {
+      return RouterAdminServer.class.getSimpleName();
+    }
+    return null;
+  }
+
   /**
    * Returns the hostname for this Router. If the hostname is not
    * explicitly configured in the given config, then it is determined.
@@ -696,9 +737,19 @@ public class Router extends CompositeService {
   }
 
   /**
-   * Get the Router safe mode service
+   * Get the Router safe mode service.
    */
   RouterSafemodeService getSafemodeService() {
     return this.safemodeService;
   }
+
+  /**
+   * Get the router admin server.
+   *
+   * @return The admin server, or null if admin is not enabled.
+   */
+  public RouterAdminServer getAdminServer() {
+    return adminServer;
+  }
+
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index f34dc41..5bb7751 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.store.DisabledNameserviceStore;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreCache;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.DisableNameserviceRequest;
@@ -55,6 +56,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -102,6 +105,7 @@ public class RouterAdminServer extends AbstractService
   private static String routerOwner;
   private static String superGroup;
   private static boolean isPermissionEnabled;
+  private boolean isStateStoreCache;
 
   public RouterAdminServer(Configuration conf, Router router)
       throws IOException {
@@ -154,6 +158,8 @@ public class RouterAdminServer extends AbstractService
     this.adminAddress = new InetSocketAddress(
         confRpcAddress.getHostName(), listenAddress.getPort());
     router.setAdminServerAddress(this.adminAddress);
+    isStateStoreCache =
+        router.getSubclusterResolver() instanceof StateStoreCache;
   }
 
   /**
@@ -243,7 +249,7 @@ public class RouterAdminServer extends AbstractService
         getMountTableStore().updateMountTableEntry(request);
 
     MountTable mountTable = request.getEntry();
-    if (mountTable != null) {
+    if (mountTable != null && router.isQuotaEnabled()) {
       synchronizeQuota(mountTable);
     }
     return response;
@@ -331,6 +337,26 @@ public class RouterAdminServer extends AbstractService
     return GetSafeModeResponse.newInstance(isInSafeMode);
   }
 
+  @Override
+  public RefreshMountTableEntriesResponse refreshMountTableEntries(
+      RefreshMountTableEntriesRequest request) throws IOException {
+    if (isStateStoreCache) {
+      /*
+       * MountTableResolver updates the MountTableStore cache as well. Other
+       * SubclusterResolver implementations are expected to update the
+       * MountTableStore cache in addition to their own.
+       */
+      boolean result = ((StateStoreCache) this.router.getSubclusterResolver())
+          .loadCache(true);
+      RefreshMountTableEntriesResponse response =
+          RefreshMountTableEntriesResponse.newInstance();
+      response.setResult(result);
+      return response;
+    } else {
+      return getMountTableStore().refreshMountTableEntries(request);
+    }
+  }
+
   /**
    * Verify if Router set safe mode state correctly.
    * @param isInSafeMode Expected state to be set.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHeartbeatService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHeartbeatService.java
index a7f02d3..c497d85 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHeartbeatService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHeartbeatService.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import org.apache.hadoop.hdfs.server.federation.store.RecordStore;
 import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreUtils;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RouterHeartbeatRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RouterHeartbeatResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord;
@@ -91,6 +92,10 @@ public class RouterHeartbeatService extends PeriodicService {
             getStateStoreVersion(MembershipStore.class),
             getStateStoreVersion(MountTableStore.class));
         record.setStateStoreVersion(stateStoreVersion);
+        // If the admin server is not started, hostPort will be empty.
+        String hostPort =
+            StateStoreUtils.getHostPortString(router.getAdminServerAddress());
+        record.setAdminAddress(hostPort);
         RouterHeartbeatRequest request =
             RouterHeartbeatRequest.newInstance(record);
         RouterHeartbeatResponse response = routerStore.routerHeartbeat(request);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/MountTableStore.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/MountTableStore.java
index b439659..9d4b64b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/MountTableStore.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/MountTableStore.java
@@ -20,8 +20,11 @@ package org.apache.hadoop.hdfs.server.federation.store;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.router.MountTableRefresherService;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Management API for the HDFS mount table information stored in
@@ -42,8 +45,29 @@ import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 @InterfaceStability.Evolving
 public abstract class MountTableStore extends CachedRecordStore<MountTable>
     implements MountTableManager {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(MountTableStore.class);
+  private MountTableRefresherService refreshService;
 
   public MountTableStore(StateStoreDriver driver) {
     super(MountTable.class, driver);
   }
+
+  public void setRefreshService(MountTableRefresherService refreshService) {
+    this.refreshService = refreshService;
+  }
+
+  /**
+   * Update mount table cache of this router as well as all other routers.
+   */
+  protected void updateCacheAllRouters() {
+    if (refreshService != null) {
+      try {
+        refreshService.refresh();
+      } catch (StateStoreUnavailableException e) {
+        LOG.error("Cannot refresh mount table: state store not available", e);
+      }
+    }
+  }
+
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
index 924c96a..4b932d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
@@ -17,6 +17,9 @@
  */
 package org.apache.hadoop.hdfs.server.federation.store;
 
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.List;
 
@@ -110,4 +113,27 @@ public final class StateStoreUtils {
     }
     return matchingList;
   }
+
+  /**
+   * Returns the address in host:port form, or an empty string if the address
+   * is null.
+   *
+   * @param address Address to convert.
+   * @return The address as host:port.
+   */
+  public static String getHostPortString(InetSocketAddress address) {
+    if (null == address) {
+      return "";
+    }
+    String hostName = address.getHostName();
+    if (hostName.equals("0.0.0.0")) {
+      try {
+        hostName = InetAddress.getLocalHost().getHostName();
+      } catch (UnknownHostException e) {
+        LOG.error("Failed to get local host name", e);
+        return "";
+      }
+    }
+    return hostName + ":" + address.getPort();
+  }
+
 }
\ No newline at end of file
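
The contract of getHostPortString() is easiest to see by example; host and
port below are placeholders, and imports are omitted.

    // null -> empty string
    assert StateStoreUtils.getHostPortString(null).isEmpty();
    // concrete address -> "host:port"
    InetSocketAddress addr =
        new InetSocketAddress("router0.example.com", 8111);
    System.out.println(StateStoreUtils.getHostPortString(addr));
    // prints: router0.example.com:8111
    // A wildcard 0.0.0.0 bind address is replaced with the local host name.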
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
index eb117d6..76c7e78 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
@@ -33,6 +33,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntr
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -68,6 +70,7 @@ public class MountTableStoreImpl extends MountTableStore {
     AddMountTableEntryResponse response =
         AddMountTableEntryResponse.newInstance();
     response.setStatus(status);
+    updateCacheAllRouters();
     return response;
   }
 
@@ -86,6 +89,7 @@ public class MountTableStoreImpl extends MountTableStore {
     UpdateMountTableEntryResponse response =
         UpdateMountTableEntryResponse.newInstance();
     response.setStatus(status);
+    updateCacheAllRouters();
     return response;
   }
 
@@ -110,6 +114,7 @@ public class MountTableStoreImpl extends MountTableStore {
     RemoveMountTableEntryResponse response =
         RemoveMountTableEntryResponse.newInstance();
     response.setStatus(status);
+    updateCacheAllRouters();
     return response;
   }
 
@@ -151,4 +156,17 @@ public class MountTableStoreImpl extends MountTableStore {
     response.setTimestamp(Time.now());
     return response;
   }
+
+  @Override
+  public RefreshMountTableEntriesResponse refreshMountTableEntries(
+      RefreshMountTableEntriesRequest request) throws IOException {
+    // Because this refresh is invoked through the admin API, it is always a
+    // forced refresh.
+    boolean result = loadCache(true);
+    RefreshMountTableEntriesResponse response =
+        RefreshMountTableEntriesResponse.newInstance();
+    response.setResult(result);
+    return response;
+  }
+
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesRequest.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesRequest.java
new file mode 100644
index 0000000..899afe7
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesRequest.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API request for refreshing mount table cached entries from state store.
+ */
+public abstract class RefreshMountTableEntriesRequest {
+
+  public static RefreshMountTableEntriesRequest newInstance()
+      throws IOException {
+    return StateStoreSerializer
+        .newRecord(RefreshMountTableEntriesRequest.class);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesResponse.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesResponse.java
new file mode 100644
index 0000000..6c9ed77
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/RefreshMountTableEntriesResponse.java
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API response for refreshing mount table entries cache from state store.
+ */
+public abstract class RefreshMountTableEntriesResponse {
+
+  public static RefreshMountTableEntriesResponse newInstance()
+      throws IOException {
+    return StateStoreSerializer
+        .newRecord(RefreshMountTableEntriesResponse.class);
+  }
+
+  @Public
+  @Unstable
+  public abstract boolean getResult();
+
+  @Public
+  @Unstable
+  public abstract void setResult(boolean result);
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesRequestPBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesRequestPBImpl.java
new file mode 100644
index 0000000..cec0699
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesRequestPBImpl.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto.Builder;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProtoOrBuilder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * RefreshMountTableEntriesRequest.
+ */
+public class RefreshMountTableEntriesRequestPBImpl
+    extends RefreshMountTableEntriesRequest implements PBRecord {
+
+  private FederationProtocolPBTranslator<RefreshMountTableEntriesRequestProto,
+      Builder, RefreshMountTableEntriesRequestProtoOrBuilder> translator =
+          new FederationProtocolPBTranslator<>(
+              RefreshMountTableEntriesRequestProto.class);
+
+  public RefreshMountTableEntriesRequestPBImpl() {
+  }
+
+  public RefreshMountTableEntriesRequestPBImpl(
+      RefreshMountTableEntriesRequestProto proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public RefreshMountTableEntriesRequestProto getProto() {
+    // If the builder is null, build() returns null; call getBuilder() to
+    // instantiate the builder.
+    this.translator.getBuilder();
+    return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+    this.translator.readInstance(base64String);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesResponsePBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesResponsePBImpl.java
new file mode 100644
index 0000000..5acf479
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RefreshMountTableEntriesResponsePBImpl.java
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto.Builder;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProtoOrBuilder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * RefreshMountTableEntriesResponse.
+ */
+public class RefreshMountTableEntriesResponsePBImpl
+    extends RefreshMountTableEntriesResponse implements PBRecord {
+
+  private FederationProtocolPBTranslator<RefreshMountTableEntriesResponseProto,
+      Builder, RefreshMountTableEntriesResponseProtoOrBuilder> translator =
+          new FederationProtocolPBTranslator<>(
+              RefreshMountTableEntriesResponseProto.class);
+
+  public RefreshMountTableEntriesResponsePBImpl() {
+  }
+
+  public RefreshMountTableEntriesResponsePBImpl(
+      RefreshMountTableEntriesResponseProto proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public RefreshMountTableEntriesResponseProto getProto() {
+    return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+    this.translator.readInstance(base64String);
+  }
+
+  @Override
+  public boolean getResult() {
+    return this.translator.getProtoOrBuilder().getResult();
+  }
+
+  @Override
+  public void setResult(boolean result) {
+    this.translator.getBuilder().setResult(result);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/RouterState.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/RouterState.java
index c90abcc..2fe6941 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/RouterState.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/RouterState.java
@@ -88,6 +88,10 @@ public abstract class RouterState extends BaseRecord {
 
   public abstract long getDateStarted();
 
+  public abstract void setAdminAddress(String adminAddress);
+
+  public abstract String getAdminAddress();
+
   /**
    * Get the identifier for the Router. It uses the address.
    *
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/RouterStatePBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/RouterStatePBImpl.java
index 23a61f9..d837386 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/RouterStatePBImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/RouterStatePBImpl.java
@@ -199,4 +199,14 @@ public class RouterStatePBImpl extends RouterState implements PBRecord {
   public long getDateCreated() {
     return this.translator.getProtoOrBuilder().getDateCreated();
   }
+
+  @Override
+  public void setAdminAddress(String adminAddress) {
+    this.translator.getBuilder().setAdminAddress(adminAddress);
+  }
+
+  @Override
+  public String getAdminAddress() {
+    return this.translator.getProtoOrBuilder().getAdminAddress();
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index bdaabe8..27c42cd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -54,6 +54,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -107,7 +109,8 @@ public class RouterAdmin extends Configured implements Tool {
     if (cmd == null) {
       String[] commands =
           {"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
-              "-safemode", "-nameservice", "-getDisabledNameservices"};
+              "-safemode", "-nameservice", "-getDisabledNameservices",
+              "-refresh"};
       StringBuilder usage = new StringBuilder();
       usage.append("Usage: hdfs dfsrouteradmin :\n");
       for (int i = 0; i < commands.length; i++) {
@@ -142,6 +145,8 @@ public class RouterAdmin extends Configured implements Tool {
       return "\t[-nameservice enable | disable <nameservice>]";
     } else if (cmd.equals("-getDisabledNameservices")) {
       return "\t[-getDisabledNameservices]";
+    } else if (cmd.equals("-refresh")) {
+      return "\t[-refresh]";
     }
     return getUsage(null);
   }
@@ -230,9 +235,10 @@ public class RouterAdmin extends Configured implements Tool {
       printUsage(cmd);
       return exitCode;
     }
+    String address = null;
     // Initialize RouterClient
     try {
-      String address = getConf().getTrimmed(
+      address = getConf().getTrimmed(
           RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
           RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
       InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
@@ -302,6 +308,8 @@ public class RouterAdmin extends Configured implements Tool {
         manageNameservice(subcmd, nsId);
       } else if ("-getDisabledNameservices".equals(cmd)) {
         getDisabledNameservices();
+      } else if ("-refresh".equals(cmd)) {
+        refresh(address);
       } else {
         throw new IllegalArgumentException("Unknown Command: " + cmd);
       }
@@ -337,6 +345,27 @@ public class RouterAdmin extends Configured implements Tool {
     return exitCode;
   }
 
+  private void refresh(String address) throws IOException {
+    if (refreshRouterCache()) {
+      System.out.println(
+          "Successfully updated mount table cache on router " + address);
+    }
+  }
+
+  /**
+   * Refresh the mount table cache on the connected router.
+   *
+   * @return true if the cache was refreshed successfully.
+   * @throws IOException if the refresh RPC to the router fails.
+   */
+  private boolean refreshRouterCache() throws IOException {
+    RefreshMountTableEntriesResponse response =
+        client.getMountTableManager().refreshMountTableEntries(
+            RefreshMountTableEntriesRequest.newInstance());
+    return response.getResult();
+  }
+
   /**
    * Add a mount table entry or update if it exists.
    *
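
For context, a minimal sketch (not part of the patch) of triggering the same
refresh programmatically, mirroring what RouterAdmin does for -refresh. The
RouterClient constructor and close() are assumed from their existing usage in
RouterAdmin; treat this as illustrative rather than a supported API:

    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
    import org.apache.hadoop.hdfs.server.federation.router.RouterClient;
    import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
    import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
    import org.apache.hadoop.net.NetUtils;

    public class RefreshMountTableCacheExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Same admin address resolution as RouterAdmin#run
        String address = conf.getTrimmed(
            RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
            RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
        InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
        RouterClient client = new RouterClient(routerSocket, conf);
        try {
          RefreshMountTableEntriesResponse response =
              client.getMountTableManager().refreshMountTableEntries(
                  RefreshMountTableEntriesRequest.newInstance());
          System.out.println("Cache refreshed: " + response.getResult());
        } finally {
          client.close();
        }
      }
    }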
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
index b1a62b1..17ae299 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
@@ -193,6 +193,7 @@ message RouterRecordProto {
   optional string version = 6;
   optional string compileInfo = 7;
   optional uint64 dateStarted = 8;
+  optional string adminAddress = 9;
 }
 
 message GetRouterRegistrationRequestProto {
@@ -219,6 +220,13 @@ message RouterHeartbeatResponseProto {
   optional bool status = 1;
 }
 
+message RefreshMountTableEntriesRequestProto {
+}
+
+message RefreshMountTableEntriesResponseProto {
+  optional bool result = 1;
+}
+
 /////////////////////////////////////////////////
 // Route State
 /////////////////////////////////////////////////
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
index f3a2b6e..34a012a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
@@ -74,4 +74,9 @@ service RouterAdminProtocolService {
    * Get the list of disabled name services.
    */
   rpc getDisabledNameservices(GetDisabledNameservicesRequestProto) returns (GetDisabledNameservicesResponseProto);
+
+  /**
+   * Refresh mount entries
+   */
+  rpc refreshMountTableEntries(RefreshMountTableEntriesRequestProto) returns(RefreshMountTableEntriesResponseProto);
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index afb3c32..72f6c2f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -547,4 +547,38 @@
     </description>
   </property>
 
+  <property>
+    <name>dfs.federation.router.mount-table.cache.update</name>
+    <value>false</value>
+    <description>Set true to enable MountTableRefreshService. This service
+      updates the mount table cache immediately after adding, modifying or
+      deleting mount table entries. If this service is not enabled, the
+      mount table cache is refreshed periodically by
+      StateStoreCacheUpdateService.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.mount-table.cache.update.timeout</name>
+    <value>1m</value>
+    <description>This property defines how long to wait for all the
+      admin servers to finish their mount table cache update. This setting
+      supports multiple time unit suffixes as described in
+      dfs.federation.router.safemode.extension.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.mount-table.cache.update.client.max.time
+    </name>
+    <value>5m</value>
+    <description>The remote router mount table cache is updated through
+      RouterClient (RPC call). To improve performance, RouterClient
+      connections are cached, but they should not be kept in the cache
+      forever. This property defines the max time a connection can be
+      cached. This setting supports multiple time unit suffixes as
+      described in dfs.federation.router.safemode.extension.
+    </description>
+  </property>
+
 </configuration>
\ No newline at end of file
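
For operators, a minimal sketch of enabling the new service; the assumption
here is that these overrides go into hdfs-rbf-site.xml on every Router (the
key names and duration suffixes are from hdfs-rbf-default.xml above):

    <configuration>
      <!-- Enable immediate mount table cache updates (MountTableRefreshService) -->
      <property>
        <name>dfs.federation.router.mount-table.cache.update</name>
        <value>true</value>
      </property>
      <!-- Optionally tighten the cross-router update timeout (default: 1m) -->
      <property>
        <name>dfs.federation.router.mount-table.cache.update.timeout</name>
        <value>30s</value>
      </property>
    </configuration>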
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 72bf6af..adc4383 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -230,6 +230,12 @@ Ls command will show below information for each mount table entry:
     Source                    Destinations              Owner                     Group                     Mode                      Quota/Usage
     /path                     ns0->/path                root                      supergroup                rwxr-xr-x                 [NsQuota: 50/0, SsQuota: 100 B/0 B]
 
+The mount table cache is refreshed periodically, but it can also be refreshed on demand by executing the refresh command:
+
+    [hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -refresh
+
+The above command will refresh the mount table cache of the connected Router. This command is redundant when the mount table refresh service is enabled, as that service always keeps the cache updated.
+
 #### Multiple subclusters
 A mount point also supports mapping multiple subclusters.
 For example, to create a mount point that stores files in subclusters `ns1` and `ns2`.
@@ -380,6 +386,9 @@ The connection to the State Store and the internal caching at the Router.
 | dfs.federation.router.store.connection.test | 60000 | How often to check for the connection to the State Store in milliseconds. |
 | dfs.federation.router.cache.ttl | 60000 | How often to refresh the State Store caches in milliseconds. |
 | dfs.federation.router.store.membership.expiration | 300000 | Expiration time in milliseconds for a membership record. |
+| dfs.federation.router.mount-table.cache.update | false | If true, the mount table cache is updated on all the routers whenever a mount table entry is added, modified or removed. |
+| dfs.federation.router.mount-table.cache.update.timeout | 1m | Max time to wait for all the routers to finish their mount table cache update. |
+| dfs.federation.router.mount-table.cache.update.client.max.time | 5m | Max time a RouterClient connection can be cached. |
 
 ### Routing
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
index c48e6e2..5095c6b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
@@ -56,6 +56,8 @@ import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
 import org.apache.hadoop.hdfs.server.namenode.ha.HAContext;
+import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
+import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.test.GenericTestUtils;
@@ -316,4 +318,29 @@ public final class FederationTestUtils {
     }).when(spyHAContext).checkOperation(any(OperationCategory.class));
     Whitebox.setInternalState(namesystem, "haContext", spyHAContext);
   }
+
+  /**
+   * Wait for a number of routers to be registered in the state store.
+   *
+   * @param stateManager router store holding the cached router records.
+   * @param routerCount number of routers expected to be registered.
+   * @param timeout max wait time in ms.
+   */
+  public static void waitRouterRegistered(RouterStore stateManager,
+      long routerCount, int timeout) throws Exception {
+    GenericTestUtils.waitFor(new Supplier<Boolean>() {
+      @Override
+      public Boolean get() {
+        try {
+          List<RouterState> cachedRecords = stateManager.getCachedRecords();
+          if (cachedRecords.size() == routerCount) {
+            return true;
+          }
+        } catch (IOException e) {
+          // Ignore
+        }
+        return false;
+      }
+    }, 100, timeout);
+  }
 }
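
A usage sketch of the new helper, mirroring the setup of the test added later
in this patch (the router count and the 60s timeout are taken from that test):

    RouterStore stateManager =
        routerContext.getRouter().getRouterStateManager();
    // Block for up to one minute until both routers appear in the state store
    FederationTestUtils.waitRouterRegistered(stateManager, 2, 60000);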
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java
index be0de52..6d9b2c0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/RouterConfigBuilder.java
@@ -38,6 +38,7 @@ public class RouterConfigBuilder {
   private boolean enableMetrics = false;
   private boolean enableQuota = false;
   private boolean enableSafemode = false;
+  private boolean enableCacheRefresh = false;
 
   public RouterConfigBuilder(Configuration configuration) {
     this.conf = configuration;
@@ -104,6 +105,11 @@ public class RouterConfigBuilder {
     return this;
   }
 
+  public RouterConfigBuilder refreshCache(boolean enable) {
+    this.enableCacheRefresh = enable;
+    return this;
+  }
+
   public RouterConfigBuilder rpc() {
     return this.rpc(true);
   }
@@ -140,6 +146,10 @@ public class RouterConfigBuilder {
     return this.safemode(true);
   }
 
+  public RouterConfigBuilder refreshCache() {
+    return this.refreshCache(true);
+  }
+
   public Configuration build() {
     conf.setBoolean(RBFConfigKeys.DFS_ROUTER_STORE_ENABLE,
         this.enableStateStore);
@@ -158,6 +168,8 @@ public class RouterConfigBuilder {
         this.enableQuota);
     conf.setBoolean(RBFConfigKeys.DFS_ROUTER_SAFEMODE_ENABLE,
         this.enableSafemode);
+    conf.setBoolean(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE,
+        this.enableCacheRefresh);
     return conf;
   }
 }
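
As a usage sketch, the method chain below (taken from the new test in this
patch) builds a Router configuration with the refresh service enabled:

    Configuration conf = new RouterConfigBuilder()
        .refreshCache()   // sets MOUNT_TABLE_CACHE_UPDATE to true
        .admin()
        .rpc()
        .heartbeat()
        .build();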
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index d0e3e50..445022b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -799,6 +799,28 @@ public class TestRouterAdminCLI {
     assertTrue(err.toString().contains("No arguments allowed"));
   }
 
+  @Test
+  public void testRefreshMountTableCache() throws Exception {
+    String src = "/refreshMount";
+
+    // create mount table entry
+    String[] argv = new String[] {"-add", src, "refreshNS0", "/refreshDest"};
+    assertEquals(0, ToolRunner.run(admin, argv));
+
+    // refresh the mount table entry cache
+    System.setOut(new PrintStream(out));
+    argv = new String[] {"-refresh"};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    assertTrue(
+        out.toString().startsWith("Successfully updated mount table cache"));
+
+    // Now ls should return that mount table entry
+    out.reset();
+    argv = new String[] {"-ls", src};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    assertTrue(out.toString().contains(src));
+  }
+
   /**
    * Wait for the Router transforming to expected state.
    * @param expectedState Expected Router state.
@@ -836,8 +858,7 @@ public class TestRouterAdminCLI {
   }
 
   @Test
-  public void testUpdateDestinationForExistingMountTable() throws
-  Exception {
+  public void testUpdateDestinationForExistingMountTable() throws Exception {
     // Add a mount table firstly
     String nsId = "ns0";
     String src = "/test-updateDestinationForExistingMountTable";
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
new file mode 100644
index 0000000..c90e614
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java
@@ -0,0 +1,396 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.curator.test.TestingServer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hdfs.server.federation.FederationTestUtils;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.service.Service.STATE;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Time;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This test class verifies that the mount table cache is updated on all the
+ * routers when MountTableRefreshService is enabled and there is a change in
+ * the mount table entries.
+ */
+public class TestRouterMountTableCacheRefresh {
+  private static TestingServer curatorTestingServer;
+  private static MiniRouterDFSCluster cluster;
+  private static RouterContext routerContext;
+  private static MountTableManager mountTableManager;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    curatorTestingServer = new TestingServer();
+    curatorTestingServer.start();
+    final String connectString = curatorTestingServer.getConnectString();
+    int numNameservices = 2;
+    cluster = new MiniRouterDFSCluster(false, numNameservices);
+    Configuration conf = new RouterConfigBuilder().refreshCache().admin().rpc()
+        .heartbeat().build();
+    conf.setClass(RBFConfigKeys.FEDERATION_FILE_RESOLVER_CLIENT_CLASS,
+        RBFConfigKeys.FEDERATION_FILE_RESOLVER_CLIENT_CLASS_DEFAULT,
+        FileSubclusterResolver.class);
+    conf.set(CommonConfigurationKeys.ZK_ADDRESS, connectString);
+    conf.setBoolean(RBFConfigKeys.DFS_ROUTER_STORE_ENABLE, true);
+    cluster.addRouterOverrides(conf);
+    cluster.startCluster();
+    cluster.startRouters();
+    cluster.waitClusterUp();
+    routerContext = cluster.getRandomRouter();
+    RouterStore routerStateManager =
+        routerContext.getRouter().getRouterStateManager();
+    mountTableManager = routerContext.getAdminClient().getMountTableManager();
+    // wait for one minute for all the routers to get registered
+    FederationTestUtils.waitRouterRegistered(routerStateManager,
+        numNameservices, 60000);
+  }
+
+  @AfterClass
+  public static void destroy() {
+    try {
+      curatorTestingServer.close();
+      cluster.shutdown();
+    } catch (IOException e) {
+      // do nothing
+    }
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    clearEntries();
+  }
+
+  private void clearEntries() throws IOException {
+    List<MountTable> result = getMountTableEntries();
+    for (MountTable mountTable : result) {
+      RemoveMountTableEntryResponse removeMountTableEntry =
+          mountTableManager.removeMountTableEntry(RemoveMountTableEntryRequest
+              .newInstance(mountTable.getSourcePath()));
+      assertTrue(removeMountTableEntry.getStatus());
+    }
+  }
+
+  /**
+   * addMountTableEntry API should internally update the cache on all the
+   * routers.
+   */
+  @Test
+  public void testMountTableEntriesCacheUpdatedAfterAddAPICall()
+      throws IOException {
+
+    // Existing mount table size
+    int existingEntriesCount = getNumMountTableEntries();
+    String srcPath = "/addPath";
+    MountTable newEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap("ns0", "/addPathDest"), Time.now(),
+        Time.now());
+    addMountTableEntry(mountTableManager, newEntry);
+
+    // When the add is done, all the routers must have updated their mount
+    // table entries
+    List<RouterContext> routers = getRouters();
+    for (RouterContext rc : routers) {
+      List<MountTable> result =
+          getMountTableEntries(rc.getAdminClient().getMountTableManager());
+      assertEquals(1 + existingEntriesCount, result.size());
+      MountTable mountTableResult = result.get(0);
+      assertEquals(srcPath, mountTableResult.getSourcePath());
+    }
+  }
+
+  /**
+   * removeMountTableEntry API should internally update the cache on all the
+   * routers.
+   */
+  @Test
+  public void testMountTableEntriesCacheUpdatedAfterRemoveAPICall()
+      throws IOException {
+    // add
+    String srcPath = "/removePathSrc";
+    MountTable newEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap("ns0", "/removePathDest"), Time.now(),
+        Time.now());
+    addMountTableEntry(mountTableManager, newEntry);
+    int addCount = getNumMountTableEntries();
+    assertEquals(1, addCount);
+
+    // remove
+    RemoveMountTableEntryResponse removeMountTableEntry =
+        mountTableManager.removeMountTableEntry(
+            RemoveMountTableEntryRequest.newInstance(srcPath));
+    assertTrue(removeMountTableEntry.getStatus());
+
+    int removeCount = getNumMountTableEntries();
+    assertEquals(addCount - 1, removeCount);
+  }
+
+  /**
+   * updateMountTableEntry API should internally update the cache on all the
+   * routers.
+   */
+  @Test
+  public void testMountTableEntriesCacheUpdatedAfterUpdateAPICall()
+      throws IOException {
+    // add
+    String srcPath = "/updatePathSrc";
+    MountTable newEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap("ns0", "/updatePathDest"), Time.now(),
+        Time.now());
+    addMountTableEntry(mountTableManager, newEntry);
+    int addCount = getNumMountTableEntries();
+    assertEquals(1, addCount);
+
+    // update
+    String key = "ns1";
+    String value = "/updatePathDest2";
+    MountTable updateEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap(key, value), Time.now(), Time.now());
+    UpdateMountTableEntryResponse updateMountTableEntry =
+        mountTableManager.updateMountTableEntry(
+            UpdateMountTableEntryRequest.newInstance(updateEntry));
+    assertTrue(updateMountTableEntry.getStatus());
+    MountTable updatedMountTable = getMountTableEntry(srcPath);
+    assertNotNull("Updated mount table entrty cannot be null",
+        updatedMountTable);
+    assertEquals(1, updatedMountTable.getDestinations().size());
+    assertEquals(key,
+        updatedMountTable.getDestinations().get(0).getNameserviceId());
+    assertEquals(value, updatedMountTable.getDestinations().get(0).getDest());
+  }
+
+  /**
+   * If a router goes down after its RouterClient has been cached, the
+   * refresh should still succeed on the other available routers. The
+   * router which is not running should be ignored.
+   */
+  @Test
+  public void testCachedRouterClientBehaviourAfterRouterStopped()
+      throws IOException {
+    String srcPath = "/addPathClientCache";
+    MountTable newEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap("ns0", "/addPathClientCacheDest"), Time.now(),
+        Time.now());
+    addMountTableEntry(mountTableManager, newEntry);
+
+    // When the add is done, all the routers must have updated their mount
+    // table entries
+    List<RouterContext> routers = getRouters();
+    for (RouterContext rc : routers) {
+      List<MountTable> result =
+          getMountTableEntries(rc.getAdminClient().getMountTableManager());
+      assertEquals(1, result.size());
+      MountTable mountTableResult = result.get(0);
+      assertEquals(srcPath, mountTableResult.getSourcePath());
+    }
+
+    // Let's stop one router
+    for (RouterContext rc : routers) {
+      InetSocketAddress adminServerAddress =
+          rc.getRouter().getAdminServerAddress();
+      if (!routerContext.getRouter().getAdminServerAddress()
+          .equals(adminServerAddress)) {
+        cluster.stopRouter(rc);
+        break;
+      }
+    }
+
+    srcPath = "/addPathClientCache2";
+    newEntry = MountTable.newInstance(srcPath,
+        Collections.singletonMap("ns0", "/addPathClientCacheDest2"), Time.now(),
+        Time.now());
+    addMountTableEntry(mountTableManager, newEntry);
+    for (RouterContext rc : getRouters()) {
+      List<MountTable> result =
+          getMountTableEntries(rc.getAdminClient().getMountTableManager());
+      assertEquals(2, result.size());
+    }
+  }
+
+  private List<RouterContext> getRouters() {
+    List<RouterContext> result = new ArrayList<>();
+    for (RouterContext rc : cluster.getRouters()) {
+      if (rc.getRouter().getServiceState() == STATE.STARTED) {
+        result.add(rc);
+      }
+    }
+    return result;
+  }
+
+  @Test
+  public void testRefreshMountTableEntriesAPI() throws IOException {
+    RefreshMountTableEntriesRequest request =
+        RefreshMountTableEntriesRequest.newInstance();
+    RefreshMountTableEntriesResponse refreshMountTableEntriesRes =
+        mountTableManager.refreshMountTableEntries(request);
+    // refresh should be successful
+    assertTrue(refreshMountTableEntriesRes.getResult());
+  }
+
+  /**
+   * Verify that the cache update times out when any of the routers takes
+   * more time than the configured timeout period.
+   */
+  @Test(timeout = 10000)
+  public void testMountTableEntriesCacheUpdateTimeout() throws IOException {
+    // Resources will be closed when router is closed
+    @SuppressWarnings("resource")
+    MountTableRefresherService mountTableRefresherService =
+        new MountTableRefresherService(routerContext.getRouter()) {
+          @Override
+          protected MountTableRefresherThread getLocalRefresher(
+              String adminAddress) {
+            return new MountTableRefresherThread(null, adminAddress) {
+              @Override
+              public void run() {
+                try {
+                  // Sleep 1 minute
+                  Thread.sleep(60000);
+                } catch (InterruptedException e) {
+                  // Do nothing
+                }
+              }
+            };
+          }
+        };
+    Configuration config = routerContext.getRouter().getConfig();
+    config.setTimeDuration(RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_TIMEOUT, 5,
+        TimeUnit.SECONDS);
+    mountTableRefresherService.init(config);
+    // One router does not respond for 1 minute, but the refresh should
+    // still finish in 5 seconds as the cache update timeout is set to
+    // 5 seconds.
+    mountTableRefresherService.refresh();
+    // The timeout on this test case asserts that the refresh returned in time.
+  }
+
+  /**
+   * Verify Cached RouterClient connections are removed from cache and closed
+   * when their max live time is elapsed.
+   */
+  @Test
+  public void testRouterClientConnectionExpiration() throws Exception {
+    final AtomicInteger createCounter = new AtomicInteger();
+    final AtomicInteger removeCounter = new AtomicInteger();
+    // Resources will be closed when router is closed
+    @SuppressWarnings("resource")
+    MountTableRefresherService mountTableRefresherService =
+        new MountTableRefresherService(routerContext.getRouter()) {
+          @Override
+          protected void closeRouterClient(RouterClient client) {
+            super.closeRouterClient(client);
+            removeCounter.incrementAndGet();
+          }
+
+          @Override
+          protected RouterClient createRouterClient(
+              InetSocketAddress routerSocket, Configuration config)
+              throws IOException {
+            createCounter.incrementAndGet();
+            return super.createRouterClient(routerSocket, config);
+          }
+        };
+    int clientCacheTime = 2000;
+    Configuration config = routerContext.getRouter().getConfig();
+    config.setTimeDuration(
+        RBFConfigKeys.MOUNT_TABLE_CACHE_UPDATE_CLIENT_MAX_TIME, clientCacheTime,
+        TimeUnit.MILLISECONDS);
+    mountTableRefresherService.init(config);
+    // Do a refresh so that RouterClients are created
+    mountTableRefresherService.refresh();
+    assertNotEquals("No RouterClient is created.", 0, createCounter.get());
+    /*
+     * Wait for the clients to expire. Let's wait up to triple the cache
+     * eviction period; after the eviction period, all created clients must
+     * have been removed and closed.
+     */
+    GenericTestUtils.waitFor(() -> createCounter.get() == removeCounter.get(),
+        100, 3 * clientCacheTime);
+  }
+
+  private int getNumMountTableEntries() throws IOException {
+    return getMountTableEntries().size();
+  }
+
+  private MountTable getMountTableEntry(String srcPath) throws IOException {
+    List<MountTable> mountTableEntries = getMountTableEntries();
+    for (MountTable mountTable : mountTableEntries) {
+      String sourcePath = mountTable.getSourcePath();
+      if (srcPath.equals(sourcePath)) {
+        return mountTable;
+      }
+    }
+    return null;
+  }
+
+  private void addMountTableEntry(MountTableManager mountTableMgr,
+      MountTable newEntry) throws IOException {
+    AddMountTableEntryRequest addRequest =
+        AddMountTableEntryRequest.newInstance(newEntry);
+    AddMountTableEntryResponse addResponse =
+        mountTableMgr.addMountTableEntry(addRequest);
+    assertTrue(addResponse.getStatus());
+  }
+
+  private List<MountTable> getMountTableEntries() throws IOException {
+    return getMountTableEntries(mountTableManager);
+  }
+
+  private List<MountTable> getMountTableEntries(
+      MountTableManager mountTableManagerParam) throws IOException {
+    GetMountTableEntriesRequest request =
+        GetMountTableEntriesRequest.newInstance("/");
+    return mountTableManagerParam.getMountTableEntries(request).getEntries();
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 0ba9b94..5bfb0cb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -437,6 +437,7 @@ Usage:
           [-safemode enter | leave | get]
           [-nameservice disable | enable <nameservice>]
           [-getDisabledNameservices]
+          [-refresh]
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
@@ -449,6 +450,7 @@ Usage:
 | `-safemode` `enter` `leave` `get` | Manually set the Router entering or leaving safe mode. The option *get* will be used for verifying if the Router is in safe mode state. |
 | `-nameservice` `disable` `enable` *nameservice* | Disable/enable  a name service from the federation. If disabled, requests will not go to that name service. |
 | `-getDisabledNameservices` | Get the name services that are disabled in the federation. |
+| `-refresh` | Update the mount table cache of the connected Router. |
 
 The commands for managing Router-based federation. See [Mount table management](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Mount_table_management) for more info.
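
A sketch of the expected interaction; the success message comes from
RouterAdmin#refresh, while the router address shown is the default admin
address and is purely illustrative:

    [hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -refresh
    Successfully updated mount table cache on router 0.0.0.0:8111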
 




[hadoop] 02/41: HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.


inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dca3b2edf2ac77019c9d6c7d76ca35f2f451327c
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Tue Oct 23 14:34:29 2018 +0800

    HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.
---
 .../resolver/FileSubclusterResolver.java           |  6 ++-
 .../federation/router/RouterClientProtocol.java    | 30 +++++++++---
 .../router/RouterQuotaUpdateService.java           |  9 ++--
 .../hdfs/server/federation/MockResolver.java       | 17 +++----
 .../federation/router/TestRouterMountTable.java    | 55 +++++++++++++++++++++-
 .../router/TestRouterRpcMultiDestination.java      |  5 +-
 6 files changed, 97 insertions(+), 25 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
index 5aa5ec9..6432bb0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
@@ -61,8 +61,10 @@ public interface FileSubclusterResolver {
    * cache.
    *
    * @param path Path to get the mount points under.
-   * @return List of mount points present at this path or zero-length list if
-   *         none are found.
+   * @return List of mount points present at this path. Return zero-length
+   *         list if the path is a mount point but there are no mount points
+   *         under the path. Return null if the path is not a mount point
+   *         and there are no mount points under the path.
    * @throws IOException Throws exception if the data is not available.
    */
   List<String> getMountPoints(String path) throws IOException;
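
The tightened contract distinguishes three cases. A caller-side sketch, where
resolver and path are assumed to be in scope:

    List<String> mounts = resolver.getMountPoints(path);
    if (mounts == null) {
      // path is not a mount point and has no mount points under it
    } else if (mounts.isEmpty()) {
      // path is a mount point, but nothing is mounted below it
    } else {
      // names of the child mount points directly under path
    }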
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 344401f..9e2979b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -720,6 +720,9 @@ public class RouterClientProtocol implements ClientProtocol {
           date = dates.get(src);
         }
         ret = getMountPointStatus(src, children.size(), date);
+      } else if (children != null) {
+        // The src is a mount point, but there are no files or directories
+        ret = getMountPointStatus(src, 0, 0);
       }
     }
 
@@ -1728,13 +1731,26 @@ public class RouterClientProtocol implements ClientProtocol {
     FsPermission permission = FsPermission.getDirDefault();
     String owner = this.superUser;
     String group = this.superGroup;
-    try {
-      // TODO support users, it should be the user for the pointed folder
-      UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
-      owner = ugi.getUserName();
-      group = ugi.getPrimaryGroupName();
-    } catch (IOException e) {
-      LOG.error("Cannot get the remote user: {}", e.getMessage());
+    if (subclusterResolver instanceof MountTableResolver) {
+      try {
+        MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
+        MountTable entry = mountTable.getMountPoint(name);
+        if (entry != null) {
+          permission = entry.getMode();
+          owner = entry.getOwnerName();
+          group = entry.getGroupName();
+        }
+      } catch (IOException e) {
+        LOG.error("Cannot get mount point: {}", e.getMessage());
+      }
+    } else {
+      try {
+        UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
+        owner = ugi.getUserName();
+        group = ugi.getPrimaryGroupName();
+      } catch (IOException e) {
+        LOG.error("Cannot get remote user: {}", e.getMessage());
+      }
     }
     long inodeId = 0;
     return new HdfsFileStatus.Builder()
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
index 4813b53..9bfd705 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
@@ -87,11 +87,12 @@ public class RouterQuotaUpdateService extends PeriodicService {
 
         QuotaUsage currentQuotaUsage = null;
 
-        // Check whether destination path exists in filesystem. If destination
-        // is not present, reset the usage. For other mount entry get current
-        // quota usage
+        // Check whether the destination path exists in the filesystem. When
+        // the mtime is zero, the destination is not present, so reset the
+        // usage. This is because a mount table entry does not have an mtime.
+        // For other mount entries, get the current quota usage.
         HdfsFileStatus ret = this.rpcServer.getFileInfo(src);
-        if (ret == null) {
+        if (ret == null || ret.getModificationTime() == 0) {
           currentQuotaUsage = new RouterQuotaUsage.Builder()
               .fileAndDirectoryCount(0)
               .quota(nsQuota)
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
index f5636ce..9bff007 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -303,15 +303,16 @@ public class MockResolver
 
   @Override
   public List<String> getMountPoints(String path) throws IOException {
+    // Mounts only supported under root level
+    if (!path.equals("/")) {
+      return null;
+    }
     List<String> mounts = new ArrayList<>();
-    if (path.equals("/")) {
-      // Mounts only supported under root level
-      for (String mount : this.locations.keySet()) {
-        if (mount.length() > 1) {
-          // Remove leading slash, this is the behavior of the mount tree,
-          // return only names.
-          mounts.add(mount.replace("/", ""));
-        }
+    for (String mount : this.locations.keySet()) {
+      if (mount.length() > 1) {
+        // Remove leading slash, this is the behavior of the mount tree,
+        // return only names.
+        mounts.add(mount.replace("/", ""));
       }
     }
     return mounts;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
index 4d8ffe1..d2b78d3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -43,8 +44,12 @@ import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.util.Time;
+import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -59,9 +64,11 @@ public class TestRouterMountTable {
   private static RouterContext routerContext;
   private static MountTableResolver mountTable;
   private static ClientProtocol routerProtocol;
+  private static long startTime;
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
+    startTime = Time.now();
 
     // Build and start a federated cluster
     cluster = new StateStoreDFSCluster(false, 1);
@@ -92,6 +99,21 @@ public class TestRouterMountTable {
     }
   }
 
+  @After
+  public void clearMountTable() throws IOException {
+    RouterClient client = routerContext.getAdminClient();
+    MountTableManager mountTableManager = client.getMountTableManager();
+    GetMountTableEntriesRequest req1 =
+        GetMountTableEntriesRequest.newInstance("/");
+    GetMountTableEntriesResponse response =
+        mountTableManager.getMountTableEntries(req1);
+    for (MountTable entry : response.getEntries()) {
+      RemoveMountTableEntryRequest req2 =
+          RemoveMountTableEntryRequest.newInstance(entry.getSourcePath());
+      mountTableManager.removeMountTableEntry(req2);
+    }
+  }
+
   @Test
   public void testReadOnly() throws Exception {
 
@@ -157,7 +179,6 @@ public class TestRouterMountTable {
    */
   @Test
   public void testListFilesTime() throws Exception {
-    Long beforeCreatingTime = Time.now();
     // Add mount table entry
     MountTable addEntry = MountTable.newInstance(
         "/testdir", Collections.singletonMap("ns0", "/testdir"));
@@ -211,10 +232,40 @@ public class TestRouterMountTable {
       Long expectedTime = pathModTime.get(currentFile);
 
       assertEquals(currentFile, fileName);
-      assertTrue(currentTime > beforeCreatingTime);
+      assertTrue(currentTime > startTime);
       assertEquals(currentTime, expectedTime);
     }
     // Verify the total number of results found/matched
     assertEquals(pathModTime.size(), listing.getPartialListing().length);
   }
+
+  /**
+   * Verify that the file listing contains correct permission.
+   */
+  @Test
+  public void testMountTablePermissions() throws Exception {
+    // Add mount table entries
+    MountTable addEntry = MountTable.newInstance(
+        "/testdir1", Collections.singletonMap("ns0", "/testdir1"));
+    addEntry.setGroupName("group1");
+    addEntry.setOwnerName("owner1");
+    addEntry.setMode(FsPermission.createImmutable((short)0775));
+    assertTrue(addMountTable(addEntry));
+    addEntry = MountTable.newInstance(
+        "/testdir2", Collections.singletonMap("ns0", "/testdir2"));
+    addEntry.setGroupName("group2");
+    addEntry.setOwnerName("owner2");
+    addEntry.setMode(FsPermission.createImmutable((short)0755));
+    assertTrue(addMountTable(addEntry));
+
+    HdfsFileStatus fs = routerProtocol.getFileInfo("/testdir1");
+    assertEquals("group1", fs.getGroup());
+    assertEquals("owner1", fs.getOwner());
+    assertEquals((short) 0775, fs.getPermission().toShort());
+
+    fs = routerProtocol.getFileInfo("/testdir2");
+    assertEquals("group2", fs.getGroup());
+    assertEquals("owner2", fs.getOwner());
+    assertEquals((short) 0755, fs.getPermission().toShort());
+  }
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
index 7e09760..94b712f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
@@ -123,8 +123,9 @@ public class TestRouterRpcMultiDestination extends TestRouterRpc {
     RouterContext rc = getRouterContext();
     Router router = rc.getRouter();
     FileSubclusterResolver subclusterResolver = router.getSubclusterResolver();
-    for (String mount : subclusterResolver.getMountPoints(path)) {
-      requiredPaths.add(mount);
+    List<String> mountList = subclusterResolver.getMountPoints(path);
+    if (mountList != null) {
+      requiredPaths.addAll(mountList);
     }
 
     // Get files/dirs from the Namenodes




[hadoop] 01/41: HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena.


inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8bc2ad2d874fc0e6a681aa587518cb699f3f7b75
Author: Vinayakumar B <vi...@apache.org>
AuthorDate: Fri Oct 12 17:19:55 2018 +0530

    HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena.
---
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  | 102 +++++++++++----------
 .../federation/router/TestRouterAdminCLI.java      |  82 ++++++++++++++---
 2 files changed, 122 insertions(+), 62 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 1aefe4f..4a9cc7a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -151,17 +151,7 @@ public class RouterAdmin extends Configured implements Tool {
   * @param arg List of command line parameters.
    */
   private void validateMax(String[] arg) {
-    if (arg[0].equals("-rm")) {
-      if (arg.length > 2) {
-        throw new IllegalArgumentException(
-            "Too many arguments, Max=1 argument allowed");
-      }
-    } else if (arg[0].equals("-ls")) {
-      if (arg.length > 2) {
-        throw new IllegalArgumentException(
-            "Too many arguments, Max=1 argument allowed");
-      }
-    } else if (arg[0].equals("-clrQuota")) {
+    if (arg[0].equals("-ls")) {
       if (arg.length > 2) {
         throw new IllegalArgumentException(
             "Too many arguments, Max=1 argument allowed");
@@ -183,63 +173,63 @@ public class RouterAdmin extends Configured implements Tool {
     }
   }
 
-  @Override
-  public int run(String[] argv) throws Exception {
-    if (argv.length < 1) {
-      System.err.println("Not enough parameters specified");
-      printUsage();
-      return -1;
-    }
-
-    int exitCode = -1;
-    int i = 0;
-    String cmd = argv[i++];
-
-    // Verify that we have enough command line parameters
+  /**
+   * Validates the minimum number of arguments for a command.
+   * @param argv List of command line parameters.
+   * @return true if the number of arguments is valid for the command,
+   *         false otherwise.
+   */
+  private boolean validateMin(String[] argv) {
+    String cmd = argv[0];
     if ("-add".equals(cmd)) {
       if (argv.length < 4) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-update".equals(cmd)) {
       if (argv.length < 4) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-rm".equals(cmd)) {
       if (argv.length < 2) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-setQuota".equals(cmd)) {
       if (argv.length < 4) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-clrQuota".equals(cmd)) {
       if (argv.length < 2) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-safemode".equals(cmd)) {
       if (argv.length < 2) {
-        System.err.println("Not enough parameters specified for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     } else if ("-nameservice".equals(cmd)) {
       if (argv.length < 3) {
-        System.err.println("Not enough parameters specificed for cmd " + cmd);
-        printUsage(cmd);
-        return exitCode;
+        return false;
       }
     }
+    return true;
+  }
+
+  @Override
+  public int run(String[] argv) throws Exception {
+    if (argv.length < 1) {
+      System.err.println("Not enough parameters specified");
+      printUsage();
+      return -1;
+    }
+
+    int exitCode = -1;
+    int i = 0;
+    String cmd = argv[i++];
 
+    // Verify that we have enough command line parameters
+    if (!validateMin(argv)) {
+      System.err.println("Not enough parameters specificed for cmd " + cmd);
+      printUsage(cmd);
+      return exitCode;
+    }
     // Initialize RouterClient
     try {
       String address = getConf().getTrimmed(
@@ -273,8 +263,17 @@ public class RouterAdmin extends Configured implements Tool {
           exitCode = -1;
         }
       } else if ("-rm".equals(cmd)) {
-        if (removeMount(argv[i])) {
-          System.out.println("Successfully removed mount point " + argv[i]);
+        while (i < argv.length) {
+          try {
+            if (removeMount(argv[i])) {
+              System.out.println("Successfully removed mount point " + argv[i]);
+            }
+          } catch (IOException e) {
+            exitCode = -1;
+            System.err
+                .println(cmd.substring(1) + ": " + e.getLocalizedMessage());
+          }
+          i++;
         }
       } else if ("-ls".equals(cmd)) {
         if (argv.length > 1) {
@@ -288,9 +287,12 @@ public class RouterAdmin extends Configured implements Tool {
               "Successfully set quota for mount point " + argv[i]);
         }
       } else if ("-clrQuota".equals(cmd)) {
-        if (clrQuota(argv[i])) {
-          System.out.println(
-              "Successfully clear quota for mount point " + argv[i]);
+        while (i < argv.length) {
+          if (clrQuota(argv[i])) {
+            System.out.println(
+                "Successfully cleared quota for mount point " + argv[i]);
+          }
+          // Always advance, even on failure, to avoid looping forever
+          i++;
+        }
       } else if ("-safemode".equals(cmd)) {
         manageSafeMode(argv[i]);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 80aca55..6642942 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -342,13 +342,43 @@ public class TestRouterAdminCLI {
     assertEquals(0, ToolRunner.run(admin, argv));
     assertTrue(out.toString().contains(
         "Cannot remove mount point " + invalidPath));
+  }
 
-    // test wrong number of arguments
-    System.setErr(new PrintStream(err));
-    argv = new String[] {"-rm", src, "check" };
-    ToolRunner.run(admin, argv);
-    assertTrue(err.toString()
-        .contains("Too many arguments, Max=1 argument allowed"));
+  @Test
+  public void testMultiArgsRemoveMountTable() throws Exception {
+    String nsId = "ns0";
+    String src1 = "/test-rmmounttable1";
+    String src2 = "/test-rmmounttable2";
+    String dest1 = "/rmmounttable1";
+    String dest2 = "/rmmounttable2";
+    // Adding mount table entries
+    String[] argv = new String[] {"-add", src1, nsId, dest1};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    argv = new String[] {"-add", src2, nsId, dest2};
+    assertEquals(0, ToolRunner.run(admin, argv));
+
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    // Ensure mount table entries added successfully
+    GetMountTableEntriesRequest getRequest =
+        GetMountTableEntriesRequest.newInstance(src1);
+    GetMountTableEntriesResponse getResponse =
+        client.getMountTableManager().getMountTableEntries(getRequest);
+    MountTable mountTable = getResponse.getEntries().get(0);
+    getRequest = GetMountTableEntriesRequest.newInstance(src2);
+    getResponse =
+        client.getMountTableManager().getMountTableEntries(getRequest);
+    assertEquals(src1, mountTable.getSourcePath());
+    mountTable = getResponse.getEntries().get(0);
+    assertEquals(src2, mountTable.getSourcePath());
+    // Remove multiple mount table entries
+    argv = new String[] {"-rm", src1, src2};
+    assertEquals(0, ToolRunner.run(admin, argv));
+
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    // Verify successful deletion of mount table entries
+    getResponse =
+        client.getMountTableManager().getMountTableEntries(getRequest);
+    assertEquals(0, getResponse.getEntries().size());
   }
 
   @Test
@@ -540,6 +570,7 @@ public class TestRouterAdminCLI {
   public void testSetAndClearQuota() throws Exception {
     String nsId = "ns0";
     String src = "/test-QuotaMounttable";
+    String src1 = "/test-QuotaMounttable1";
     String dest = "/QuotaMounttable";
     String[] argv = new String[] {"-add", src, nsId, dest};
     assertEquals(0, ToolRunner.run(admin, argv));
@@ -605,15 +636,42 @@ public class TestRouterAdminCLI {
     assertEquals(HdfsConstants.QUOTA_RESET, quotaUsage.getQuota());
     assertEquals(HdfsConstants.QUOTA_RESET, quotaUsage.getSpaceQuota());
 
+    // verify multi args ClrQuota
+    String dest1 = "/QuotaMounttable1";
+    // Add mount table entries.
+    argv = new String[] {"-add", src, nsId, dest};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    argv = new String[] {"-add", src1, nsId, dest1};
+    assertEquals(0, ToolRunner.run(admin, argv));
+
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    // SetQuota for the added entries
+    argv = new String[] {"-setQuota", src, "-nsQuota", String.valueOf(nsQuota),
+        "-ssQuota", String.valueOf(ssQuota)};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    argv = new String[] {"-setQuota", src1, "-nsQuota",
+        String.valueOf(nsQuota), "-ssQuota", String.valueOf(ssQuota)};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    // Clear quota for the added entries
+    argv = new String[] {"-clrQuota", src, src1};
+    assertEquals(0, ToolRunner.run(admin, argv));
+
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    getResponse =
+        client.getMountTableManager().getMountTableEntries(getRequest);
+
+    // Verify clear quota for the entries
+    for (int i = 0; i < 2; i++) {
+      mountTable = getResponse.getEntries().get(i);
+      quotaUsage = mountTable.getQuota();
+      assertEquals(HdfsConstants.QUOTA_RESET, quotaUsage.getQuota());
+      assertEquals(HdfsConstants.QUOTA_RESET, quotaUsage.getSpaceQuota());
+    }
+
     // verify wrong arguments
     System.setErr(new PrintStream(err));
-    argv = new String[] {"-clrQuota", src, "check"};
-    ToolRunner.run(admin, argv);
-    assertTrue(err.toString(),
-        err.toString().contains("Too many arguments, Max=1 argument allowed"));
-
     argv = new String[] {"-setQuota", src, "check", "check2"};
-    err.reset();
     ToolRunner.run(admin, argv);
     assertTrue(err.toString().contains("Invalid argument : check"));
   }




[hadoop] 27/41: HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.


inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d747fb10b11e787d72bac313919b43e7b4e3d241
Author: Surendra Singh Lilhore <su...@apache.org>
AuthorDate: Wed Jan 16 11:42:17 2019 +0530

    HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.
---
 .../hdfs/protocolPB/RouterAdminProtocol.java       |  34 +++++++
 .../hdfs/protocolPB/RouterPolicyProvider.java      |  52 ++++++++++
 .../router/TestRouterPolicyProvider.java           | 108 +++++++++++++++++++++
 3 files changed, 194 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
new file mode 100644
index 0000000..d885989
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.router.NameserviceManager;
+import org.apache.hadoop.hdfs.server.federation.router.RouterStateManager;
+import org.apache.hadoop.ipc.GenericRefreshProtocol;
+
+/**
+ * Protocol used by routeradmin to communicate with the state store.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Stable
+public interface RouterAdminProtocol extends MountTableManager,
+    RouterStateManager, NameserviceManager, GenericRefreshProtocol {
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java
new file mode 100644
index 0000000..af391ff
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java
@@ -0,0 +1,52 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hdfs.HDFSPolicyProvider;
+import org.apache.hadoop.security.authorize.Service;
+
+/**
+ * {@link HDFSPolicyProvider} for RBF protocols.
+ */
+@InterfaceAudience.Private
+public class RouterPolicyProvider extends HDFSPolicyProvider {
+
+  private static final Service[] RBF_SERVICES = new Service[] {
+      new Service(CommonConfigurationKeys.SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
+          RouterAdminProtocol.class) };
+
+  private final Service[] services;
+
+  public RouterPolicyProvider() {
+    List<Service> list = new ArrayList<>();
+    list.addAll(Arrays.asList(super.getServices()));
+    list.addAll(Arrays.asList(RBF_SERVICES));
+    services = list.toArray(new Service[list.size()]);
+  }
+
+  @Override
+  public Service[] getServices() {
+    return Arrays.copyOf(services, services.length);
+  }
+}
\ No newline at end of file
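
As a side note, a minimal sketch (not part of the commit) of how the combined policy can be inspected; getServices() and Service#getProtocol() are the accessors the test below relies on:

    // List every protocol secured by the router policy provider:
    // the HDFS defaults plus the RouterAdminProtocol added above.
    for (Service service : new RouterPolicyProvider().getServices()) {
      System.out.println(service.getProtocol().getName());
    }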
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterPolicyProvider.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterPolicyProvider.java
new file mode 100644
index 0000000..36a00e5
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterPolicyProvider.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer;
+
+import static org.junit.Assert.*;
+
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.commons.lang3.ClassUtils;
+import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.security.authorize.Service;
+
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Sets;
+
+/**
+ * Test suite covering RouterPolicyProvider. We expect that it contains a
+ * security policy definition for every RPC protocol used in HDFS. The test
+ * suite works by scanning an RPC server's class to find the protocol interfaces
+ * it implements, and then comparing that to the protocol interfaces covered in
+ * RouterPolicyProvider. This is a parameterized test repeated for multiple HDFS
+ * RPC server classes.
+ */
+@RunWith(Parameterized.class)
+public class TestRouterPolicyProvider {
+  private static final Logger LOG = LoggerFactory.getLogger(
+      TestRouterPolicyProvider.class);
+
+  private static Set<Class<?>> policyProviderProtocols;
+
+  @Rule
+  public TestName testName = new TestName();
+
+  private final Class<?> rpcServerClass;
+
+  @BeforeClass
+  public static void initialize() {
+    Service[] services = new RouterPolicyProvider().getServices();
+    policyProviderProtocols = new HashSet<>(services.length);
+    for (Service service : services) {
+      policyProviderProtocols.add(service.getProtocol());
+    }
+  }
+
+  public TestRouterPolicyProvider(Class<?> rpcServerClass) {
+    this.rpcServerClass = rpcServerClass;
+  }
+
+  @Parameters(name = "protocolsForServer-{0}")
+  public static List<Class<?>[]> data() {
+    return Arrays.asList(new Class<?>[][] {{RouterRpcServer.class},
+        {NameNodeRpcServer.class}, {DataNode.class},
+        {RouterAdminServer.class}});
+  }
+
+  @Test
+  public void testPolicyProviderForServer() {
+    List<?> ifaces = ClassUtils.getAllInterfaces(rpcServerClass);
+    Set<Class<?>> serverProtocols = new HashSet<>(ifaces.size());
+    for (Object obj : ifaces) {
+      Class<?> iface = (Class<?>) obj;
+      if (iface.getSimpleName().endsWith("Protocol")) {
+        serverProtocols.add(iface);
+      }
+    }
+    LOG.info("Running test {} for RPC server {}.  Found server protocols {} "
+        + "and policy provider protocols {}.", testName.getMethodName(),
+        rpcServerClass.getName(), serverProtocols, policyProviderProtocols);
+    assertFalse("Expected to find at least one protocol in server.",
+        serverProtocols.isEmpty());
+    final Set<Class<?>> differenceSet = Sets.difference(serverProtocols,
+        policyProviderProtocols);
+    assertTrue(String.format(
+        "Following protocols for server %s are not defined in " + "%s: %s",
+        rpcServerClass.getName(), RouterPolicyProvider.class.getName(), Arrays
+            .toString(differenceSet.toArray())), differenceSet.isEmpty());
+  }
+}
\ No newline at end of file




[hadoop] 26/41: HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b240f39d78ce563c975038c8274a69f3187c6d83
Author: Surendra Singh Lilhore <su...@apache.org>
AuthorDate: Tue Jan 15 16:40:39 2019 +0530

    HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar.
---
 .../hadoop-common/src/main/conf/hadoop-policy.xml              | 10 ++++++++++
 .../java/org/apache/hadoop/fs/CommonConfigurationKeys.java     |  2 ++
 .../java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java    |  5 +++++
 .../apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java   |  6 +++---
 .../hdfs/server/federation/router/RouterAdminServer.java       | 10 ++++------
 .../hadoop/hdfs/server/federation/router/RouterRpcServer.java  |  4 ++--
 .../apache/hadoop/fs/contract/router/RouterHDFSContract.java   |  4 ++++
 7 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
index bd7c111..e1640f9 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
@@ -110,6 +110,16 @@
   </property>
 
   <property>
+    <name>security.router.admin.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for the RouterAdmin protocol. The ACL is a comma-separated
+    list of user names followed by a comma-separated list of group names,
+    with the two lists separated by a blank, e.g. "alice,bob users,wheel".
+    A special value of "*" means all users are allowed.
+    </description>
+  </property>
+
+  <property>
     <name>security.zkfc.protocol.acl</name>
     <value>*</value>
     <description>ACL for access to the ZK Failover Controller
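
A sketch of wiring the new key up programmatically, e.g. in a test (the constant is the one introduced in CommonConfigurationKeys below; the user/group values are illustrative):

    Configuration conf = new Configuration();
    // Allow user "alice" and members of group "wheel" only.
    conf.set(CommonConfigurationKeys.SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
        "alice wheel");
    // Service-level authorization must be enabled for ACLs to be enforced.
    conf.setBoolean(CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, true);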
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 72e5309..8204c0d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -216,6 +216,8 @@ public class CommonConfigurationKeys extends CommonConfigurationKeysPublic {
   SECURITY_CLIENT_PROTOCOL_ACL = "security.client.protocol.acl";
   public static final String SECURITY_CLIENT_DATANODE_PROTOCOL_ACL =
       "security.client.datanode.protocol.acl";
+  public static final String SECURITY_ROUTER_ADMIN_PROTOCOL_ACL =
+      "security.router.admin.protocol.acl";
   public static final String
   SECURITY_DATANODE_PROTOCOL_ACL = "security.datanode.protocol.acl";
   public static final String
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 6de186a..c449a2e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -92,6 +92,11 @@ public final class HdfsConstants {
    */
   public static final String CLIENT_NAMENODE_PROTOCOL_NAME =
       "org.apache.hadoop.hdfs.protocol.ClientProtocol";
+  /**
+   * Router admin protocol name.
+   */
+  public static final String ROUTER_ADMIN_PROTOCOL_NAME =
+      "org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol";
 
   // Timeouts for communicating with DataNode for streaming writes/reads
   public static final int READ_TIMEOUT = 60 * 1000;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
index 96fa794..d308616 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
@@ -19,10 +19,10 @@ package org.apache.hadoop.hdfs.protocolPB;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
 import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.ipc.ProtocolInfo;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.token.TokenInfo;
@@ -35,9 +35,9 @@ import org.apache.hadoop.security.token.TokenInfo;
 @InterfaceAudience.Private
 @InterfaceStability.Stable
 @KerberosInfo(
-    serverPrincipal = DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY)
+    serverPrincipal = RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY)
 @TokenInfo(DelegationTokenSelector.class)
-@ProtocolInfo(protocolName = HdfsConstants.CLIENT_NAMENODE_PROTOCOL_NAME,
+@ProtocolInfo(protocolName = HdfsConstants.ROUTER_ADMIN_PROTOCOL_NAME,
     protocolVersion = 1)
 public interface RouterAdminProtocolPB extends
     RouterAdminProtocolService.BlockingInterface {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 027dd11..e2d944c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -29,16 +29,16 @@ import java.util.Set;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.HDFSPolicyProvider;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
+import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
 import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolPB;
 import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslatorPB;
+import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
-import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.store.DisabledNameserviceStore;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreCache;
@@ -66,7 +66,6 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableE
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
-import org.apache.hadoop.ipc.GenericRefreshProtocol;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RPC.Server;
@@ -89,8 +88,7 @@ import com.google.protobuf.BlockingService;
  * router. It is created, started, and stopped by {@link Router}.
  */
 public class RouterAdminServer extends AbstractService
-    implements MountTableManager, RouterStateManager, NameserviceManager,
-    GenericRefreshProtocol {
+    implements RouterAdminProtocol {
 
   private static final Logger LOG =
       LoggerFactory.getLogger(RouterAdminServer.class);
@@ -159,7 +157,7 @@ public class RouterAdminServer extends AbstractService
 
     // Set service-level authorization security policy
     if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
-      this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
+      this.adminServer.refreshServiceAcl(conf, new RouterPolicyProvider());
     }
 
     // The RPC-server port can be ephemeral... ensure we have the correct info
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index ad5980b..0d4f94c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -62,7 +62,6 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.ha.HAServiceProtocol;
 import org.apache.hadoop.hdfs.AddBlockFlag;
 import org.apache.hadoop.hdfs.DFSUtil;
-import org.apache.hadoop.hdfs.HDFSPolicyProvider;
 import org.apache.hadoop.hdfs.inotify.EventBatchList;
 import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
@@ -103,6 +102,7 @@ import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;
 import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB;
 import org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolPB;
 import org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB;
+import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
 import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
@@ -275,7 +275,7 @@ public class RouterRpcServer extends AbstractService
     this.serviceAuthEnabled = conf.getBoolean(
         HADOOP_SECURITY_AUTHORIZATION, false);
     if (this.serviceAuthEnabled) {
-      rpcServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
+      rpcServer.refreshServiceAcl(conf, new RouterPolicyProvider());
     }
 
     // We don't want the server to log the full stack trace for some exceptions
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
index 510cb95..46339a3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
@@ -90,6 +90,10 @@ public class RouterHDFSContract extends HDFSContract {
     return cluster.getCluster();
   }
 
+  public static MiniRouterDFSCluster getRouterCluster() {
+    return cluster;
+  }
+
   public static FileSystem getFileSystem() throws IOException {
     //assumes cluster is not null
     Assert.assertNotNull("cluster not created", cluster);




[hadoop] 16/41: HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5640958138e1d5badf754dcb1c35ffd6ac43ae20
Author: Takanobu Asanuma <ta...@apache.org>
AuthorDate: Sun Dec 16 00:40:51 2018 +0900

    HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena.
---
 .../main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java  | 2 +-
 .../apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 4a9cc7a..bdaabe8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -109,7 +109,7 @@ public class RouterAdmin extends Configured implements Tool {
           {"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
               "-safemode", "-nameservice", "-getDisabledNameservices"};
       StringBuilder usage = new StringBuilder();
-      usage.append("Usage: hdfs routeradmin :\n");
+      usage.append("Usage: hdfs dfsrouteradmin :\n");
       for (int i = 0; i < commands.length; i++) {
         usage.append(getUsage(commands[i]));
         if (i + 1 < commands.length) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 6642942..d0e3e50 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -549,7 +549,7 @@ public class TestRouterAdminCLI {
 
     argv = new String[] {"-Random"};
     assertEquals(-1, ToolRunner.run(admin, argv));
-    String expected = "Usage: hdfs routeradmin :\n"
+    String expected = "Usage: hdfs dfsrouteradmin :\n"
         + "\t[-add <source> <nameservice1, nameservice2, ...> <destination> "
         + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
         + "-owner <owner> -group <group> -mode <mode>]\n"




[hadoop] 41/41: HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f476bb1ee946d8d3ed1824d0858de2b1fef60b67
Author: Giovanni Matteo Fumarola <gi...@apache.org>
AuthorDate: Wed Feb 20 11:08:55 2019 -0800

    HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.
---
 .../RouterAdminProtocolServerSideTranslatorPB.java |  22 ++++
 .../RouterAdminProtocolTranslatorPB.java           |  21 +++
 .../metrics/FederationRPCPerformanceMonitor.java   |   8 +-
 .../federation/resolver/MountTableManager.java     |  12 ++
 .../federation/router/RouterAdminServer.java       |  36 ++++++
 .../federation/store/impl/MountTableStoreImpl.java |   7 +
 .../store/protocol/GetDestinationRequest.java      |  57 ++++++++
 .../store/protocol/GetDestinationResponse.java     |  59 +++++++++
 .../impl/pb/GetDestinationRequestPBImpl.java       |  73 +++++++++++
 .../impl/pb/GetDestinationResponsePBImpl.java      |  83 ++++++++++++
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  28 +++-
 .../src/main/proto/FederationProtocol.proto        |   8 ++
 .../src/main/proto/RouterProtocol.proto            |   5 +
 .../src/site/markdown/HDFSRouterFederation.md      |   4 +
 .../federation/router/TestRouterAdminCLI.java      |  64 ++++++++-
 ...erRPCMultipleDestinationMountTableResolver.java | 144 +++++++++++++++++++++
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 17 files changed, 628 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index a31c46d..6f6724e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -31,6 +31,8 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
@@ -54,6 +56,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -76,6 +80,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafe
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeRequestPBImpl;
@@ -298,4 +304,20 @@ public class RouterAdminProtocolServerSideTranslatorPB implements
       throw new ServiceException(e);
     }
   }
+
+  @Override
+  public GetDestinationResponseProto getDestination(
+      RpcController controller, GetDestinationRequestProto request)
+      throws ServiceException {
+    try {
+      GetDestinationRequest req =
+          new GetDestinationRequestPBImpl(request);
+      GetDestinationResponse response = server.getDestination(req);
+      GetDestinationResponsePBImpl responsePB =
+          (GetDestinationResponsePBImpl)response;
+      return responsePB.getProto();
+    } catch (IOException e) {
+      throw new ServiceException(e);
+    }
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
index 1fbb06d..9cdc3c1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolTranslatorPB.java
@@ -32,6 +32,8 @@ import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
 import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
@@ -57,6 +59,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -77,6 +81,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnableNam
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnableNameserviceResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
@@ -288,4 +294,19 @@ public class RouterAdminProtocolTranslatorPB
       throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
     }
   }
+
+  @Override
+  public GetDestinationResponse getDestination(
+      GetDestinationRequest request) throws IOException {
+    GetDestinationRequestPBImpl requestPB =
+        (GetDestinationRequestPBImpl) request;
+    GetDestinationRequestProto proto = requestPB.getProto();
+    try {
+      GetDestinationResponseProto response =
+          rpcProxy.getDestination(null, proto);
+      return new GetDestinationResponsePBImpl(response);
+    } catch (ServiceException e) {
+      throw new IOException(ProtobufHelper.getRemoteException(e).getMessage());
+    }
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
index cbd63de..bae83aa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
@@ -129,7 +129,7 @@ public class FederationRPCPerformanceMonitor implements RouterRpcMonitor {
   public long proxyOp() {
     PROXY_TIME.set(monotonicNow());
     long processingTime = getProcessingTime();
-    if (processingTime >= 0) {
+    if (metrics != null && processingTime >= 0) {
       metrics.addProcessingTime(processingTime);
     }
     return Thread.currentThread().getId();
@@ -139,7 +139,7 @@ public class FederationRPCPerformanceMonitor implements RouterRpcMonitor {
   public void proxyOpComplete(boolean success) {
     if (success) {
       long proxyTime = getProxyTime();
-      if (proxyTime >= 0) {
+      if (metrics != null && proxyTime >= 0) {
         metrics.addProxyTime(proxyTime);
       }
     }
@@ -147,7 +147,9 @@ public class FederationRPCPerformanceMonitor implements RouterRpcMonitor {
 
   @Override
   public void proxyOpFailureStandby() {
-    metrics.incrProxyOpFailureStandby();
+    if (metrics != null) {
+      metrics.incrProxyOpFailureStandby();
+    }
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
index 9a1e416..5ff2e28 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableManager.java
@@ -21,6 +21,8 @@ import java.io.IOException;
 
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
@@ -93,4 +95,14 @@ public interface MountTableManager {
    */
   RefreshMountTableEntriesResponse refreshMountTableEntries(
       RefreshMountTableEntriesRequest request) throws IOException;
+
+  /**
+   * Get the destination subcluster (namespace) of a file/directory.
+   *
+   * @param request Fully populated request object including the file to check.
+   * @return The response including the subcluster where the input file is.
+   * @throws IOException Throws exception if the data store is not initialized.
+   */
+  GetDestinationResponse getDestination(
+      GetDestinationRequest request) throws IOException;
 }
\ No newline at end of file
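
A client-side sketch of the new call (assuming a connected admin client exposing getMountTableManager(), as the RouterAdmin tool uses further below; the path is illustrative):

    MountTableManager mountTable = client.getMountTableManager();
    GetDestinationRequest request =
        GetDestinationRequest.newInstance("/tmp/file.txt");
    GetDestinationResponse response = mountTable.getDestination(request);
    // One nameservice id per subcluster where the file was located.
    for (String nsId : response.getDestinations()) {
      System.out.println(nsId);
    }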
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index e2d944c..a2a5a42 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -23,7 +23,10 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.util.ArrayList;
 import java.util.Collection;
+import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 import com.google.common.base.Preconditions;
@@ -39,6 +42,7 @@ import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocolServerSideTranslator
 import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
 import org.apache.hadoop.hdfs.server.federation.store.DisabledNameserviceStore;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreCache;
@@ -52,6 +56,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -378,6 +384,36 @@ public class RouterAdminServer extends AbstractService
     }
   }
 
+  @Override
+  public GetDestinationResponse getDestination(
+      GetDestinationRequest request) throws IOException {
+    final String src = request.getSrcPath();
+    final List<String> nsIds = new ArrayList<>();
+    RouterRpcServer rpcServer = this.router.getRpcServer();
+    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, false);
+    RouterRpcClient rpcClient = rpcServer.getRPCClient();
+    RemoteMethod method = new RemoteMethod("getFileInfo",
+        new Class<?>[] {String.class}, new RemoteParam());
+    try {
+      Map<RemoteLocation, HdfsFileStatus> responses =
+          rpcClient.invokeConcurrent(
+              locations, method, false, false, HdfsFileStatus.class);
+      for (RemoteLocation location : locations) {
+        if (responses.get(location) != null) {
+          nsIds.add(location.getNameserviceId());
+        }
+      }
+    } catch (IOException ioe) {
+      LOG.error("Cannot get location for {}: {}",
+          src, ioe.getMessage());
+    }
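+    // If no subcluster reported the file, fall back to the first
+    // resolved location so the caller still gets a best-effort answer.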
+    if (nsIds.isEmpty() && !locations.isEmpty()) {
+      String nsId = locations.get(0).getNameserviceId();
+      nsIds.add(nsId);
+    }
+    return GetDestinationResponse.newInstance(nsIds);
+  }
+
   /**
    * Verify if Router set safe mode state correctly.
    * @param isInSafeMode Expected state to be set.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
index 76c7e78..d5e1857 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
@@ -31,6 +31,8 @@ import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
@@ -169,4 +171,9 @@ public class MountTableStoreImpl extends MountTableStore {
     return response;
   }
 
+  @Override
+  public GetDestinationResponse getDestination(
+      GetDestinationRequest request) throws IOException {
+    throw new UnsupportedOperationException("Requires the RouterRpcServer");
+  }
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationRequest.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationRequest.java
new file mode 100644
index 0000000..0d5074b
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationRequest.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API request for getting the destination subcluster of a file.
+ */
+public abstract class GetDestinationRequest {
+
+  public static GetDestinationRequest newInstance()
+      throws IOException {
+    return StateStoreSerializer
+        .newRecord(GetDestinationRequest.class);
+  }
+
+  public static GetDestinationRequest newInstance(String srcPath)
+      throws IOException {
+    GetDestinationRequest request = newInstance();
+    request.setSrcPath(srcPath);
+    return request;
+  }
+
+  public static GetDestinationRequest newInstance(Path srcPath)
+      throws IOException {
+    return newInstance(srcPath.toString());
+  }
+
+  @Public
+  @Unstable
+  public abstract String getSrcPath();
+
+  @Public
+  @Unstable
+  public abstract void setSrcPath(String srcPath);
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationResponse.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationResponse.java
new file mode 100644
index 0000000..534b673
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/GetDestinationResponse.java
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API response for getting the destination subcluster of a file.
+ */
+public abstract class GetDestinationResponse {
+
+  public static GetDestinationResponse newInstance()
+      throws IOException {
+    return StateStoreSerializer
+        .newRecord(GetDestinationResponse.class);
+  }
+
+  public static GetDestinationResponse newInstance(
+      Collection<String> nsIds) throws IOException {
+    GetDestinationResponse response = newInstance();
+    response.setDestinations(nsIds);
+    return response;
+  }
+
+  @Public
+  @Unstable
+  public abstract Collection<String> getDestinations();
+
+  @Public
+  @Unstable
+  public void setDestination(String nsId) {
+    setDestinations(Collections.singletonList(nsId));
+  }
+
+  @Public
+  @Unstable
+  public abstract void setDestinations(Collection<String> nsIds);
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationRequestPBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationRequestPBImpl.java
new file mode 100644
index 0000000..b97f455
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationRequestPBImpl.java
@@ -0,0 +1,73 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProtoOrBuilder;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto.Builder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * GetDestinationRequest.
+ */
+public class GetDestinationRequestPBImpl extends GetDestinationRequest
+    implements PBRecord {
+
+  private FederationProtocolPBTranslator<GetDestinationRequestProto,
+      Builder, GetDestinationRequestProtoOrBuilder> translator =
+          new FederationProtocolPBTranslator<>(
+              GetDestinationRequestProto.class);
+
+  public GetDestinationRequestPBImpl() {
+  }
+
+  public GetDestinationRequestPBImpl(GetDestinationRequestProto proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public GetDestinationRequestProto getProto() {
+    return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+    this.translator.readInstance(base64String);
+  }
+
+  @Override
+  public String getSrcPath() {
+    return this.translator.getProtoOrBuilder().getSrcPath();
+  }
+
+  @Override
+  public void setSrcPath(String path) {
+    this.translator.getBuilder().setSrcPath(path);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationResponsePBImpl.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationResponsePBImpl.java
new file mode 100644
index 0000000..f758f99
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/GetDestinationResponsePBImpl.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto.Builder;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProtoOrBuilder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * GetDestinationResponse.
+ */
+public class GetDestinationResponsePBImpl
+    extends GetDestinationResponse implements PBRecord {
+
+  private FederationProtocolPBTranslator<GetDestinationResponseProto,
+      Builder, GetDestinationResponseProtoOrBuilder> translator =
+          new FederationProtocolPBTranslator<>(
+              GetDestinationResponseProto.class);
+
+  public GetDestinationResponsePBImpl() {
+  }
+
+  public GetDestinationResponsePBImpl(
+      GetDestinationResponseProto proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public GetDestinationResponseProto getProto() {
+    // If no builder exists yet, build() returns null; call getBuilder()
+    // first so the builder is instantiated.
+    this.translator.getBuilder();
+    return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+    this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+    this.translator.readInstance(base64String);
+  }
+
+  @Override
+  public Collection<String> getDestinations() {
+    return new ArrayList<>(
+        this.translator.getProtoOrBuilder().getDestinationsList());
+  }
+
+  @Override
+  public void setDestinations(Collection<String> nsIds) {
+    this.translator.getBuilder().clearDestinations();
+    for (String nsId : nsIds) {
+      this.translator.getBuilder().addDestinations(nsId);
+    }
+  }
+}
\ No newline at end of file
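
A sketch of the protobuf round trip these records support (using only the methods defined above; the nameservice ids are illustrative):

    GetDestinationResponsePBImpl response = new GetDestinationResponsePBImpl();
    response.setDestinations(Arrays.asList("ns0", "ns1"));
    // getProto() builds the wire-format message...
    GetDestinationResponseProto proto = response.getProto();
    // ...which can be rehydrated into a fresh record on the other side.
    GetDestinationResponse copy = new GetDestinationResponsePBImpl(proto);
    System.out.println(copy.getDestinations());  // [ns0, ns1]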
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 37aad88..b04b069 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -52,6 +52,8 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -117,7 +119,8 @@ public class RouterAdmin extends Configured implements Tool {
   private String getUsage(String cmd) {
     if (cmd == null) {
       String[] commands =
-          {"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
+          {"-add", "-update", "-rm", "-ls", "-getDestination",
+              "-setQuota", "-clrQuota",
               "-safemode", "-nameservice", "-getDisabledNameservices",
               "-refresh"};
       StringBuilder usage = new StringBuilder();
@@ -143,6 +146,8 @@ public class RouterAdmin extends Configured implements Tool {
       return "\t[-rm <source>]";
     } else if (cmd.equals("-ls")) {
       return "\t[-ls <path>]";
+    } else if (cmd.equals("-getDestination")) {
+      return "\t[-getDestination <path>]";
     } else if (cmd.equals("-setQuota")) {
       return "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
           + "<quota in bytes or quota size string>]";
@@ -172,6 +177,11 @@ public class RouterAdmin extends Configured implements Tool {
         throw new IllegalArgumentException(
             "Too many arguments, Max=1 argument allowed");
       }
+    } else if (arg[0].equals("-getDestination")) {
+      if (arg.length > 2) {
+        throw new IllegalArgumentException(
+            "Too many arguments, Max=1 argument allowed only");
+      }
     } else if (arg[0].equals("-safemode")) {
       if (arg.length > 2) {
         throw new IllegalArgumentException(
@@ -208,6 +218,10 @@ public class RouterAdmin extends Configured implements Tool {
       if (argv.length < 2) {
         return false;
       }
+    } else if ("-getDestination".equals(cmd)) {
+      if (argv.length < 2) {
+        return false;
+      }
     } else if ("-setQuota".equals(cmd)) {
       if (argv.length < 4) {
         return false;
@@ -302,6 +316,8 @@ public class RouterAdmin extends Configured implements Tool {
         } else {
           listMounts("/");
         }
+      } else if ("-getDestination".equals(cmd)) {
+        getDestination(argv[i]);
       } else if ("-setQuota".equals(cmd)) {
         if (setQuota(argv, i)) {
           System.out.println(
@@ -709,6 +725,16 @@ public class RouterAdmin extends Configured implements Tool {
     }
   }
 
+  private void getDestination(String path) throws IOException {
+    path = normalizeFileSystemPath(path);
+    MountTableManager mountTable = client.getMountTableManager();
+    GetDestinationRequest request =
+        GetDestinationRequest.newInstance(path);
+    GetDestinationResponse response = mountTable.getDestination(request);
+    System.out.println("Destination: " +
+        StringUtils.join(",", response.getDestinations()));
+  }
+
   /**
    * Set quota for a mount table entry.
    *
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
index 1e5e37b..9e9fd48 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/FederationProtocol.proto
@@ -175,6 +175,14 @@ message GetMountTableEntriesResponseProto {
   optional uint64 timestamp = 2;
 }
 
+message GetDestinationRequestProto {
+  optional string srcPath = 1;
+}
+
+message GetDestinationResponseProto {
+  repeated string destinations = 1;
+}
+
 
 /////////////////////////////////////////////////
 // Routers
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
index 34a012a..d6aff49 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/proto/RouterProtocol.proto
@@ -79,4 +79,9 @@ service RouterAdminProtocolService {
    * Refresh mount entries
    */
   rpc refreshMountTableEntries(RefreshMountTableEntriesRequestProto) returns(RefreshMountTableEntriesResponseProto);
+
+  /**
+   * Get the destination of a file/directory in the federation.
+   */
+  rpc getDestination(GetDestinationRequestProto) returns (GetDestinationResponseProto);
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 2ae0c2b..f24ff12 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -261,6 +261,10 @@ RANDOM can be used for reading and writing data from/into different subclusters.
 The common use for this approach is to have the same data in multiple subclusters and balance the reads across subclusters.
 For example, if thousands of containers need to read the same data (e.g., a library), one can use RANDOM to read the data from any of the subclusters.
 
+To determine which subcluster contains a file:
+
+    [hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -getDestination /user/user1/file.txt
+
 Note that consistency of the data across subclusters is not guaranteed by the Router.
 
 ### Disabling nameservices
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index ab733dd..9f53dd4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -26,6 +26,11 @@ import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 import java.net.InetSocketAddress;
 import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
@@ -36,6 +41,8 @@ import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
 import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
 import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
@@ -78,7 +85,8 @@ public class TestRouterAdminCLI {
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
-    cluster = new StateStoreDFSCluster(false, 1);
+    cluster = new StateStoreDFSCluster(false, 1,
+        MultipleDestinationMountTableResolver.class);
     // Build and start a router with State Store + admin + RPC
     Configuration conf = new RouterConfigBuilder()
         .stateStore()
@@ -550,6 +558,11 @@ public class TestRouterAdminCLI {
         .contains("\t[-nameservice enable | disable <nameservice>]"));
     out.reset();
 
+    argv = new String[] {"-getDestination"};
+    assertEquals(-1, ToolRunner.run(admin, argv));
+    assertTrue(out.toString().contains("\t[-getDestination <path>]"));
+    out.reset();
+
     argv = new String[] {"-Random"};
     assertEquals(-1, ToolRunner.run(admin, argv));
     String expected = "Usage: hdfs dfsrouteradmin :\n"
@@ -560,6 +573,7 @@ public class TestRouterAdminCLI {
         + "<destination> " + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
         + "-owner <owner> -group <group> -mode <mode>]\n" + "\t[-rm <source>]\n"
         + "\t[-ls <path>]\n"
+        + "\t[-getDestination <path>]\n"
         + "\t[-setQuota <path> -nsQuota <nsQuota> -ssQuota "
         + "<quota in bytes or quota size string>]\n" + "\t[-clrQuota <path>]\n"
         + "\t[-safemode enter | leave | get]\n"
@@ -1091,4 +1105,52 @@ public class TestRouterAdminCLI {
     assertEquals(dest, mountTable.getDestinations().get(0).getDest());
     assertEquals(order, mountTable.getDestOrder());
   }
+
+  @Test
+  public void testGetDestination() throws Exception {
+
+    // Test the basic destination feature
+    System.setOut(new PrintStream(out));
+    String[] argv = new String[] {"-getDestination", "/file.txt"};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    assertEquals("Destination: ns0" + System.lineSeparator(), out.toString());
+
+    // Add a HASH_ALL entry to check the destination changing
+    argv = new String[] {"-add", "/testGetDest", "ns0,ns1",
+        "/testGetDestination",
+        "-order", DestinationOrder.HASH_ALL.toString()};
+    assertEquals(0, ToolRunner.run(admin, argv));
+    stateStore.loadCache(MountTableStoreImpl.class, true);
+    MountTableResolver resolver =
+        (MountTableResolver) router.getSubclusterResolver();
+    resolver.loadCache(true);
+
+    // Files should be distributed across ns0 and ns1
+    Map<String, AtomicInteger> counter = new TreeMap<>();
+    final Pattern p = Pattern.compile("Destination: (.*)");
+    for (int i = 0; i < 10; i++) {
+      out.reset();
+      String filename = "file" + i + ".txt";
+      argv = new String[] {"-getDestination", "/testGetDest/" + filename};
+      assertEquals(0, ToolRunner.run(admin, argv));
+      String outLine = out.toString();
+      Matcher m = p.matcher(outLine);
+      assertTrue(m.find());
+      String nsId = m.group(1);
+      if (counter.containsKey(nsId)) {
+        counter.get(nsId).getAndIncrement();
+      } else {
+        counter.put(nsId, new AtomicInteger(1));
+      }
+    }
+    assertEquals("Wrong counter size: " + counter, 2, counter.size());
+    assertTrue(counter + " should contain ns0", counter.containsKey("ns0"));
+    assertTrue(counter + " should contain ns1", counter.containsKey("ns1"));
+
+    // Bad cases
+    argv = new String[] {"-getDestination"};
+    assertEquals(-1, ToolRunner.run(admin, argv));
+    argv = new String[] {"-getDestination /file1.txt /file2.txt"};
+    assertEquals(-1, ToolRunner.run(admin, argv));
+  }
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
index 8c15151..46bfff9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
@@ -23,11 +23,19 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
+import java.util.TreeSet;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DFSTestUtil;
@@ -41,8 +49,11 @@ import org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMoun
 import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
@@ -52,6 +63,8 @@ import org.junit.Test;
  * Tests router rpc with multiple destination mount table resolver.
  */
 public class TestRouterRPCMultipleDestinationMountTableResolver {
+  private static final List<String> NS_IDS = Arrays.asList("ns0", "ns1");
+
   private static StateStoreDFSCluster cluster;
   private static RouterContext routerContext;
   private static MountTableResolver resolver;
@@ -391,4 +404,135 @@ public class TestRouterRPCMultipleDestinationMountTableResolver {
 
     return addResponse.getStatus();
   }
+
+  @Test
+  public void testGetDestinationHashAll() throws Exception {
+    testGetDestination(DestinationOrder.HASH_ALL,
+        Arrays.asList("ns1"),
+        Arrays.asList("ns1"),
+        Arrays.asList("ns1", "ns0"));
+  }
+
+  @Test
+  public void testGetDestinationHash() throws Exception {
+    testGetDestination(DestinationOrder.HASH,
+        Arrays.asList("ns1"),
+        Arrays.asList("ns1"),
+        Arrays.asList("ns1"));
+  }
+
+  @Test
+  public void testGetDestinationRandom() throws Exception {
+    testGetDestination(DestinationOrder.RANDOM,
+        null, null, Arrays.asList("ns0", "ns1"));
+  }
+
+  /**
+   * Generic test for getting the destination subcluster.
+   * @param order DestinationOrder of the mount point.
+   * @param expectFileLocation Expected subclusters of a file. null for any.
+   * @param expectNoFileLocation Expected subclusters of a non-existing file.
+   * @param expectDirLocation Expected subclusters of a nested directory.
+   * @throws Exception If the test cannot run.
+   */
+  private void testGetDestination(DestinationOrder order,
+      List<String> expectFileLocation,
+      List<String> expectNoFileLocation,
+      List<String> expectDirLocation) throws Exception {
+    setupOrderMountPath(order);
+
+    RouterClient client = routerContext.getAdminClient();
+    MountTableManager mountTableManager = client.getMountTableManager();
+
+    // If the file exists, it should be in the expected subcluster
+    final String pathFile = "dir/file";
+    final Path pathRouterFile = new Path("/mount", pathFile);
+    final Path pathLocalFile = new Path("/tmp", pathFile);
+    FileStatus fileStatus = routerFs.getFileStatus(pathRouterFile);
+    assertTrue(fileStatus + " should be a file", fileStatus.isFile());
+    GetDestinationResponse respFile = mountTableManager.getDestination(
+        GetDestinationRequest.newInstance(pathRouterFile));
+    if (expectFileLocation != null) {
+      assertEquals(expectFileLocation, respFile.getDestinations());
+      assertPathStatus(expectFileLocation, pathLocalFile, false);
+    } else {
+      Collection<String> dests = respFile.getDestinations();
+      assertPathStatus(dests, pathLocalFile, false);
+    }
+
+    // If the file does not exist, it should give us the expected subclusters
+    final String pathNoFile = "dir/no-file";
+    final Path pathRouterNoFile = new Path("/mount", pathNoFile);
+    final Path pathLocalNoFile = new Path("/tmp", pathNoFile);
+    LambdaTestUtils.intercept(FileNotFoundException.class,
+        () -> routerFs.getFileStatus(pathRouterNoFile));
+    GetDestinationResponse respNoFile = mountTableManager.getDestination(
+        GetDestinationRequest.newInstance(pathRouterNoFile));
+    if (expectNoFileLocation != null) {
+      assertEquals(expectNoFileLocation, respNoFile.getDestinations());
+    }
+    assertPathStatus(Collections.emptyList(), pathLocalNoFile, false);
+
+    // If the folder exists, it should be in the expected subcluster
+    final String pathNestedDir = "dir/dir";
+    final Path pathRouterNestedDir = new Path("/mount", pathNestedDir);
+    final Path pathLocalNestedDir = new Path("/tmp", pathNestedDir);
+    FileStatus dirStatus = routerFs.getFileStatus(pathRouterNestedDir);
+    assertTrue(dirStatus + " should be a directory", dirStatus.isDirectory());
+    GetDestinationResponse respDir = mountTableManager.getDestination(
+        GetDestinationRequest.newInstance(pathRouterNestedDir));
+    assertEqualsCollection(expectDirLocation, respDir.getDestinations());
+    assertPathStatus(expectDirLocation, pathLocalNestedDir, true);
+  }
+
+  /**
+   * Assert that the status of a file in the subcluster is the expected one.
+   * @param expectedLocations Subclusters where the file is expected to exist.
+   * @param path Path of the file/directory to check.
+   * @param isDir If the path is expected to be a directory.
+   * @throws Exception If the file cannot be checked.
+   */
+  private void assertPathStatus(Collection<String> expectedLocations,
+      Path path, boolean isDir) throws Exception {
+    for (String nsId : NS_IDS) {
+      final FileSystem fs = getFileSystem(nsId);
+      if (expectedLocations.contains(nsId)) {
+        assertTrue(path + " should exist in " + nsId, fs.exists(path));
+        final FileStatus status = fs.getFileStatus(path);
+        if (isDir) {
+          assertTrue(path + " should be a directory", status.isDirectory());
+        } else {
+          assertTrue(path + " should be a file", status.isFile());
+        }
+      } else {
+        assertFalse(path + " should not exist in " + nsId, fs.exists(path));
+      }
+    }
+  }
+
+  /**
+   * Assert if two collections are equal without checking the order.
+   * @param col1 First collection to compare.
+   * @param col2 Second collection to compare.
+   */
+  private static void assertEqualsCollection(
+      Collection<String> col1, Collection<String> col2) {
+    assertEquals(new TreeSet<>(col1), new TreeSet<>(col2));
+  }
+
+  /**
+   * Get the filesystem for a given subcluster.
+   * @param nsId Identifier of the nameservice (subcluster).
+   * @return The FileSystem for that subcluster, or null if unknown.
+   */
+  private static FileSystem getFileSystem(final String nsId) {
+    if (nsId.equals("ns0")) {
+      return nnFs0;
+    }
+    if (nsId.equals("ns1")) {
+      return nnFs1;
+    }
+    return null;
+  }
+
 }
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index c3f113d..32e88a2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -432,6 +432,7 @@ Usage:
           [-update <source> <nameservice1, nameservice2, ...> <destination> [-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] -owner <owner> -group <group> -mode <mode>]
           [-rm <source>]
           [-ls <path>]
+          [-getDestination <path>]
           [-setQuota <path> -nsQuota <nsQuota> -ssQuota <quota in bytes or quota size string>]
           [-clrQuota <path>]
           [-safemode enter | leave | get]
@@ -446,6 +447,7 @@ Usage:
 | `-update` *source* *nameservices* *destination* | Update a mount table entry or create one if it does not exist. |
 | `-rm` *source* | Remove mount point of specified path. |
 | `-ls` *path* | List mount points under specified path. |
+| `-getDestination` *path* | Get the subcluster where a file is or should be created. |
 | `-setQuota` *path* `-nsQuota` *nsQuota* `-ssQuota` *ssQuota* | Set quota for specified path. See [HDFS Quotas Guide](./HdfsQuotaAdminGuide.html) for the quota detail. |
 | `-clrQuota` *path* | Clear quota of given mount point. See [HDFS Quotas Guide](./HdfsQuotaAdminGuide.html) for the quota detail. |
 | `-safemode` `enter` `leave` `get` | Manually set the Router entering or leaving safe mode. The option *get* will be used for verifying if the Router is in safe mode state. |




[hadoop] 21/41: HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6e770ff428666f5bcd7dd25f2672558bf6b65426
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Wed Jan 2 10:49:00 2019 -0800

    HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui.
---
 .../federation/router/ConnectionNullException.java | 33 ++++++++++++++++++
 .../server/federation/router/RouterRpcClient.java  | 20 ++++++++---
 .../server/federation/FederationTestUtils.java     | 31 +++++++++++++++++
 .../router/TestRouterClientRejectOverload.java     | 40 ++++++++++++++++++++++
 4 files changed, 120 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java
new file mode 100644
index 0000000..53de602
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+
+
+/**
+ * Exception thrown when a non-null connection cannot be obtained.
+ */
+public class ConnectionNullException extends IOException {
+
+  private static final long serialVersionUID = 1L;
+
+  public ConnectionNullException(String msg) {
+    super(msg);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index a21e980..c4d3a20 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -270,7 +270,8 @@ public class RouterRpcClient {
     }
 
     if (connection == null) {
-      throw new IOException("Cannot get a connection to " + rpcAddress);
+      throw new ConnectionNullException("Cannot get a connection to "
+          + rpcAddress);
     }
     return connection;
   }
@@ -363,9 +364,9 @@ public class RouterRpcClient {
     Map<FederationNamenodeContext, IOException> ioes = new LinkedHashMap<>();
     for (FederationNamenodeContext namenode : namenodes) {
       ConnectionContext connection = null;
+      String nsId = namenode.getNameserviceId();
+      String rpcAddress = namenode.getRpcAddress();
       try {
-        String nsId = namenode.getNameserviceId();
-        String rpcAddress = namenode.getRpcAddress();
         connection = this.getConnection(ugi, nsId, rpcAddress, protocol);
         ProxyAndInfo<?> client = connection.getClient();
         final Object proxy = client.getProxy();
@@ -394,6 +395,16 @@ public class RouterRpcClient {
           }
           // RemoteException returned by NN
           throw (RemoteException) ioe;
+        } else if (ioe instanceof ConnectionNullException) {
+          if (this.rpcMonitor != null) {
+            this.rpcMonitor.proxyOpFailureCommunicate();
+          }
+          LOG.error("Get connection for {} {} error: {}", nsId, rpcAddress,
+              ioe.getMessage());
+          // Throw StandbyException so that client can retry
+          StandbyException se = new StandbyException(ioe.getMessage());
+          se.initCause(ioe);
+          throw se;
         } else {
           // Other communication error, this is a failure
           // Communication retries are handled by the retry policy
@@ -425,7 +436,8 @@ public class RouterRpcClient {
       String addr = namenode.getRpcAddress();
       IOException ioe = entry.getValue();
       if (ioe instanceof StandbyException) {
-        LOG.error("{} {} at {} is in Standby", nsId, nnId, addr);
+        LOG.error("{} {} at {} is in Standby: {}", nsId, nnId, addr,
+            ioe.getMessage());
       } else {
         LOG.error("{} {} at {} error: \"{}\"",
             nsId, nnId, addr, ioe.getMessage());
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
index 5095c6b..d92edac 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
@@ -52,6 +52,9 @@ import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
 import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
+import org.apache.hadoop.hdfs.server.federation.router.ConnectionManager;
+import org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient;
+import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
 import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
@@ -60,6 +63,7 @@ import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
 import org.apache.hadoop.hdfs.server.federation.store.records.RouterState;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.Whitebox;
 import org.mockito.invocation.InvocationOnMock;
@@ -343,4 +347,31 @@ public final class FederationTestUtils {
       }
     }, 100, timeout);
   }
+
+  /**
+   * Simulate a RouterRpcServer whose RouterRpcClient's ConnectionManager
+   * throws an IOException when getConnection is called, so the
+   * RouterRpcClient ends up with a null connection.
+   * @param server RouterRpcServer to modify.
+   * @throws IOException If the mock connection manager cannot be set up.
+   */
+  public static void simulateThrowExceptionRouterRpcServer(
+      final RouterRpcServer server) throws IOException {
+    RouterRpcClient rpcClient = server.getRPCClient();
+    ConnectionManager connectionManager =
+        new ConnectionManager(server.getConfig());
+    ConnectionManager spyConnectionManager = spy(connectionManager);
+    doAnswer(new Answer() {
+      @Override
+      public Object answer(InvocationOnMock invocation) throws Throwable {
+        LOG.info("Simulating connectionManager throw IOException {}",
+            invocation.getMock());
+        throw new IOException("Simulate connectionManager throw IOException");
+      }
+    }).when(spyConnectionManager).getConnection(
+        any(UserGroupInformation.class), any(String.class), any(Class.class));
+
+    Whitebox.setInternalState(rpcClient, "connectionManager",
+        spyConnectionManager);
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
index 3c51e13..0664159 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.federation.router;
 
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.simulateSlowNamenode;
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.simulateThrowExceptionRouterRpcServer;
 import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -240,4 +241,43 @@ public class TestRouterClientRejectOverload {
           num <= expOverloadMax);
     }
   }
+
+  @Test
+  public void testConnectionNullException() throws Exception {
+    setupCluster(false);
+
+    // Choose 1st router
+    RouterContext routerContext = cluster.getRouters().get(0);
+    Router router = routerContext.getRouter();
+    // This router will throw ConnectionNullException
+    simulateThrowExceptionRouterRpcServer(router.getRpcServer());
+
+    // Set dfs.client.failover.random.order false, to pick 1st router at first
+    Configuration conf = cluster.getRouterClientConf();
+    conf.setBoolean("dfs.client.failover.random.order", false);
+    // Client to access Router Cluster
+    DFSClient routerClient =
+        new DFSClient(new URI("hdfs://fed"), conf);
+
+    // Get router0 metrics
+    FederationRPCMetrics rpcMetrics0 = cluster.getRouters().get(0)
+        .getRouter().getRpcServer().getRPCMetrics();
+    // Get router1 metrics
+    FederationRPCMetrics rpcMetrics1 = cluster.getRouters().get(1)
+        .getRouter().getRpcServer().getRPCMetrics();
+
+    // Original failures
+    long originalRouter0Failures = rpcMetrics0.getProxyOpFailureCommunicate();
+    long originalRouter1Failures = rpcMetrics1.getProxyOpFailureCommunicate();
+
+    // RPC call must be successful
+    routerClient.getFileInfo("/");
+
+    // Router 0 failures will increase
+    assertEquals(originalRouter0Failures + 1,
+        rpcMetrics0.getProxyOpFailureCommunicate());
+    // Router 1 failures will not change
+    assertEquals(originalRouter1Failures,
+        rpcMetrics1.getProxyOpFailureCommunicate());
+  }
 }
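
As a side note, a hedged sketch of the client-side setup this change helps:
with two routers behind an HA logical nameservice, a StandbyException from
one router lets the failover proxy provider retry the other. The hostnames,
ports, and the logical name "fed" below are illustrative assumptions:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSClient;
    import org.apache.hadoop.hdfs.HdfsConfiguration;

    public class RouterFailoverExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // Two routers behind the logical nameservice "fed".
        conf.set("dfs.nameservices", "fed");
        conf.set("dfs.ha.namenodes.fed", "r0,r1");
        conf.set("dfs.namenode.rpc-address.fed.r0", "router0:8888");
        conf.set("dfs.namenode.rpc-address.fed.r1", "router1:8888");
        conf.set("dfs.client.failover.proxy.provider.fed",
            "org.apache.hadoop.hdfs.server.namenode.ha."
                + "ConfiguredFailoverProxyProvider");
        // Try routers in a fixed order, as the test above does.
        conf.setBoolean("dfs.client.failover.random.order", false);
        DFSClient client = new DFSClient(new URI("hdfs://fed"), conf);
        // If router0 cannot obtain a namenode connection, it now surfaces a
        // StandbyException, so this call retries against router1 instead of
        // failing with a plain IOException.
        System.out.println(client.getFileInfo("/"));
      }
    }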




[hadoop] 06/41: HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 30573af0579ac3db9b7332785403c8b980d6d396
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Wed Nov 7 07:37:02 2018 +0530

    HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.
---
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index f38205a..014e0d5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -36,7 +36,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <dependencies>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>




[hadoop] 15/41: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f945456092884d51d5e7efe020193641399f3a29
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Wed Dec 5 11:44:38 2018 +0800

    HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
---
 .../federation/router/ConnectionManager.java       | 20 ++++----
 .../server/federation/router/ConnectionPool.java   | 14 +++++-
 .../server/federation/router/RBFConfigKeys.java    |  5 ++
 .../src/main/resources/hdfs-rbf-default.xml        |  8 ++++
 .../federation/router/TestConnectionManager.java   | 55 ++++++++++++++++++----
 5 files changed, 85 insertions(+), 17 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..74bbbb5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
       LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
     this.conf = config;
 
-    // Configure minimum and maximum connection pools
+    // Configure minimum, maximum and active connection pools
     this.maxSize = this.conf.getInt(
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+    this.minActiveRatio = this.conf.getFloat(
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
     // Map with the connections indexed by UGI and Namenode
     this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
         pool = this.pools.get(connectionId);
         if (pool == null) {
           pool = new ConnectionPool(
-              this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+              this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+              this.minActiveRatio, protocol);
           this.pools.put(connectionId, pool);
         }
       } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
       long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
       int total = pool.getNumConnections();
       int active = pool.getNumActiveConnections();
+      float poolMinActiveRatio = pool.getMinActiveRatio();
       if (timeSinceLastActive > connectionCleanupPeriodMs ||
-          active < MIN_ACTIVE_RATIO * total) {
+          active < poolMinActiveRatio * total) {
         // Remove and close 1 connection
         List<ConnectionContext> conns = pool.removeConnections(1);
         for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
           try {
             int total = pool.getNumConnections();
             int active = pool.getNumActiveConnections();
+            float poolMinActiveRatio = pool.getMinActiveRatio();
             if (pool.getNumConnections() < pool.getMaxSize() &&
-                active >= MIN_ACTIVE_RATIO * total) {
+                active >= poolMinActiveRatio * total) {
               ConnectionContext conn = pool.newConnection();
               pool.addConnection(conn);
             } else {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index fab3b81..f868521 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,6 +91,8 @@ public class ConnectionPool {
   private final int minSize;
   /** Max number of connections per user. */
   private final int maxSize;
+  /** Min ratio of active connections per user. */
+  private final float minActiveRatio;
 
   /** The last time a connection was active. */
   private volatile long lastActiveTime = 0;
@@ -98,7 +100,7 @@ public class ConnectionPool {
 
   protected ConnectionPool(Configuration config, String address,
       UserGroupInformation user, int minPoolSize, int maxPoolSize,
-      Class<?> proto) throws IOException {
+      float minActiveRatio, Class<?> proto) throws IOException {
 
     this.conf = config;
 
@@ -112,6 +114,7 @@ public class ConnectionPool {
     // Set configuration parameters for the pool
     this.minSize = minPoolSize;
     this.maxSize = maxPoolSize;
+    this.minActiveRatio = minActiveRatio;
 
     // Add minimum connections to the pool
     for (int i=0; i<this.minSize; i++) {
@@ -141,6 +144,15 @@ public class ConnectionPool {
   }
 
   /**
+   * Get the minimum ratio of active connections in this pool.
+   *
+   * @return Minimum ratio of active connections.
+   */
+  protected float getMinActiveRatio() {
+    return this.minActiveRatio;
+  }
+
+  /**
    * Get the connection pool identifier.
    *
    * @return Connection pool identifier.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 10018fe..0070de7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -102,6 +102,11 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "connection.creator.queue-size";
   public static final int
       DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT = 100;
+  public static final String
+      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO =
+      FEDERATION_ROUTER_PREFIX + "connection.min-active-ratio";
+  public static final float
+      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT = 0.5f;
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
       FEDERATION_ROUTER_PREFIX + "connection.pool-size";
   public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 09050bb..afb3c32 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -118,6 +118,14 @@
   </property>
 
   <property>
+    <name>dfs.federation.router.connection.min-active-ratio</name>
+    <value>0.5f</value>
+    <description>
+      Minimum ratio of active connections in each router-to-namenode connection pool; pools below this ratio may be trimmed during cleanup.
+    </description>
+  </property>
+
+  <property>
     <name>dfs.federation.router.connection.clean.ms</name>
     <value>10000</value>
     <description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
index 765f6c8..a06dd6a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
@@ -80,14 +80,14 @@ public class TestConnectionManager {
     Map<ConnectionPoolId, ConnectionPool> poolMap = connManager.getPools();
 
     ConnectionPool pool1 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool1, 9, 4);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
         pool1);
 
     ConnectionPool pool2 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool2, 10, 10);
     poolMap.put(
         new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -110,7 +110,7 @@ public class TestConnectionManager {
 
     // Make sure the number of connections doesn't go below minSize
     ConnectionPool pool3 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool3, 8, 0);
     poolMap.put(
         new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -134,7 +134,7 @@ public class TestConnectionManager {
   public void testConnectionCreatorWithException() throws Exception {
     // Create a bad connection pool pointing to unresolvable namenode address.
     ConnectionPool badPool = new ConnectionPool(
-            conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 0, 10,
+            conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f,
             ClientProtocol.class);
     BlockingQueue<ConnectionPool> queue = new ArrayBlockingQueue<>(1);
     queue.add(badPool);
@@ -160,7 +160,7 @@ public class TestConnectionManager {
 
     // Create a bad connection pool pointing to unresolvable namenode address.
     ConnectionPool badPool = new ConnectionPool(
-        conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 1, 10,
+        conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 1, 10, 0.5f,
         ClientProtocol.class);
   }
 
@@ -171,7 +171,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -196,7 +196,7 @@ public class TestConnectionManager {
   @Test
   public void testValidClientIndex() throws Exception {
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, 0.5f, ClientProtocol.class);
     for(int i = -3; i <= 3; i++) {
       pool.getClientIndex().set(i);
       ConnectionContext conn = pool.getConnection();
@@ -212,7 +212,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, NamenodeProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, NamenodeProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(
@@ -262,4 +262,43 @@ public class TestConnectionManager {
     }
   }
 
+  @Test
+  public void testConfigureConnectionActiveRatio() throws IOException {
+    final int totalConns = 10;
+    int activeConns = 7;
+
+    Configuration tmpConf = new Configuration();
+    // Set dfs.federation.router.connection.min-active-ratio to 0.8f
+    tmpConf.setFloat(
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, 0.8f);
+    ConnectionManager tmpConnManager = new ConnectionManager(tmpConf);
+    tmpConnManager.start();
+
+    // Create one new connection pool
+    tmpConnManager.getConnection(TEST_USER1, TEST_NN_ADDRESS,
+        NamenodeProtocol.class);
+
+    Map<ConnectionPoolId, ConnectionPool> poolMap = tmpConnManager.getPools();
+    ConnectionPoolId connectionPoolId = new ConnectionPoolId(TEST_USER1,
+        TEST_NN_ADDRESS, NamenodeProtocol.class);
+    ConnectionPool pool = poolMap.get(connectionPoolId);
+
+    // Test min active ratio is 0.8f
+    assertEquals(0.8f, pool.getMinActiveRatio(), 0.001f);
+
+    pool.getConnection().getClient();
+    // Test there is one active connection in pool
+    assertEquals(1, pool.getNumActiveConnections());
+
+    // Add 9 more connections to the pool, 6 of them active
+    addConnectionsToPool(pool, totalConns - 1, activeConns - 1);
+
+    // There are 7 active connections, which is below
+    // totalConns(10) * minActiveRatio(0.8f), so the cleanup pass
+    // is allowed to trim the pool.
+    tmpConnManager.cleanup(pool);
+    assertEquals(totalConns - 1, pool.getNumConnections());
+
+    tmpConnManager.close();
+  }
 }
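
For operators, a short sketch of applying the new knob programmatically;
the 0.8f value is illustrative (0.5f remains the default):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.federation.router.ConnectionManager;
    import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;

    public class MinActiveRatioExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Trim pools whenever fewer than 80% of their connections are active.
        conf.setFloat(
            RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
            0.8f);
        ConnectionManager manager = new ConnectionManager(conf);
        manager.start();
        // ... pools created from here on honor the configured ratio ...
        manager.close();
      }
    }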




[hadoop] 25/41: HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b6b8d14f99317c9a2eeb650db6651ed9e70f690a
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Tue Jan 15 14:21:33 2019 +0800

    HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.
---
 .../hdfs/server/federation/router/Quota.java       |  6 ++--
 .../federation/router/RouterClientProtocol.java    | 22 +++++++-------
 .../federation/router/RouterQuotaManager.java      |  2 +-
 .../router/RouterQuotaUpdateService.java           |  6 ++--
 .../server/federation/router/RouterQuotaUsage.java | 35 ++++++++++++----------
 5 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index 5d0309f..cfb538f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -163,7 +163,7 @@ public class Quota {
     long ssCount = 0;
     long nsQuota = HdfsConstants.QUOTA_RESET;
     long ssQuota = HdfsConstants.QUOTA_RESET;
-    boolean hasQuotaUnSet = false;
+    boolean hasQuotaUnset = false;
 
     for (Map.Entry<RemoteLocation, QuotaUsage> entry : results.entrySet()) {
       RemoteLocation loc = entry.getKey();
@@ -172,7 +172,7 @@ public class Quota {
         // If quota is not set in real FileSystem, the usage
         // value will return -1.
         if (usage.getQuota() == -1 && usage.getSpaceQuota() == -1) {
-          hasQuotaUnSet = true;
+          hasQuotaUnset = true;
         }
         nsQuota = usage.getQuota();
         ssQuota = usage.getSpaceQuota();
@@ -189,7 +189,7 @@ public class Quota {
 
     QuotaUsage.Builder builder = new QuotaUsage.Builder()
         .fileAndDirectoryCount(nsCount).spaceConsumed(ssCount);
-    if (hasQuotaUnSet) {
+    if (hasQuotaUnset) {
       builder.quota(HdfsConstants.QUOTA_RESET)
           .spaceQuota(HdfsConstants.QUOTA_RESET);
     } else {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 2089c57..c41959e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.hdfs.server.federation.router;
 import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.updateMountPointStatus;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
-import org.apache.hadoop.fs.BatchedRemoteIterator;
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
 import org.apache.hadoop.fs.CacheFlag;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
@@ -1141,7 +1141,7 @@ public class RouterClientProtocol implements ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(
+  public BatchedEntries<CacheDirectiveEntry> listCacheDirectives(
       long prevId, CacheDirectiveInfo filter) throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
     return null;
@@ -1163,7 +1163,7 @@ public class RouterClientProtocol implements ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevKey)
+  public BatchedEntries<CachePoolEntry> listCachePools(String prevKey)
       throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
     return null;
@@ -1274,7 +1274,7 @@ public class RouterClientProtocol implements ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long prevId)
+  public BatchedEntries<EncryptionZone> listEncryptionZones(long prevId)
       throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
     return null;
@@ -1287,7 +1287,7 @@ public class RouterClientProtocol implements ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(
+  public BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(
       long prevId) throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
     return null;
@@ -1523,15 +1523,17 @@ public class RouterClientProtocol implements ClientProtocol {
 
   @Deprecated
   @Override
-  public BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId)
+  public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId)
       throws IOException {
-    return listOpenFiles(prevId, EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES),
+    return listOpenFiles(prevId,
+        EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES),
         OpenFilesIterator.FILTER_PATH_DEFAULT);
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
-      EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException {
+  public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
+      EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
+          throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
     return null;
   }
@@ -1663,7 +1665,7 @@ public class RouterClientProtocol implements ClientProtocol {
     // Get the file info from everybody
     Map<RemoteLocation, HdfsFileStatus> results =
         rpcClient.invokeConcurrent(locations, method, HdfsFileStatus.class);
-    int children=0;
+    int children = 0;
     // We return the first file
     HdfsFileStatus dirStatus = null;
     for (RemoteLocation loc : locations) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
index fa2a6e4..e818f5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaManager.java
@@ -88,7 +88,7 @@ public class RouterQuotaManager {
   }
 
   /**
-   * Get children paths (can including itself) under specified federation path.
+   * Get children paths (can include itself) under specified federation path.
    * @param parentPath Federated path.
    * @return Set of children paths.
    */
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
index 9bfd705..dd21e1a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
@@ -186,10 +186,8 @@ public class RouterQuotaUpdateService extends PeriodicService {
    */
   private List<MountTable> getQuotaSetMountTables() throws IOException {
     List<MountTable> mountTables = getMountTableEntries();
-    Set<String> stalePaths = new HashSet<>();
-    for (String path : this.quotaManager.getAll()) {
-      stalePaths.add(path);
-    }
+    Set<String> allPaths = this.quotaManager.getAll();
+    Set<String> stalePaths = new HashSet<>(allPaths);
 
     List<MountTable> neededMountTables = new LinkedList<>();
     for (MountTable entry : mountTables) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUsage.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUsage.java
index de9119a..7fd845a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUsage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUsage.java
@@ -75,9 +75,10 @@ public final class RouterQuotaUsage extends QuotaUsage {
    * @throws NSQuotaExceededException If the quota is exceeded.
    */
   public void verifyNamespaceQuota() throws NSQuotaExceededException {
-    if (Quota.isViolated(getQuota(), getFileAndDirectoryCount())) {
-      throw new NSQuotaExceededException(getQuota(),
-          getFileAndDirectoryCount());
+    long quota = getQuota();
+    long fileAndDirectoryCount = getFileAndDirectoryCount();
+    if (Quota.isViolated(quota, fileAndDirectoryCount)) {
+      throw new NSQuotaExceededException(quota, fileAndDirectoryCount);
     }
   }
 
@@ -87,25 +88,29 @@ public final class RouterQuotaUsage extends QuotaUsage {
    * @throws DSQuotaExceededException If the quota is exceeded.
    */
   public void verifyStoragespaceQuota() throws DSQuotaExceededException {
-    if (Quota.isViolated(getSpaceQuota(), getSpaceConsumed())) {
-      throw new DSQuotaExceededException(getSpaceQuota(), getSpaceConsumed());
+    long spaceQuota = getSpaceQuota();
+    long spaceConsumed = getSpaceConsumed();
+    if (Quota.isViolated(spaceQuota, spaceConsumed)) {
+      throw new DSQuotaExceededException(spaceQuota, spaceConsumed);
     }
   }
 
   @Override
   public String toString() {
-    String nsQuota = String.valueOf(getQuota());
-    String nsCount = String.valueOf(getFileAndDirectoryCount());
-    if (getQuota() == HdfsConstants.QUOTA_RESET) {
-      nsQuota = "-";
-      nsCount = "-";
+    String nsQuota = "-";
+    String nsCount = "-";
+    long quota = getQuota();
+    if (quota != HdfsConstants.QUOTA_RESET) {
+      nsQuota = String.valueOf(quota);
+      nsCount = String.valueOf(getFileAndDirectoryCount());
     }
 
-    String ssQuota = StringUtils.byteDesc(getSpaceQuota());
-    String ssCount = StringUtils.byteDesc(getSpaceConsumed());
-    if (getSpaceQuota() == HdfsConstants.QUOTA_RESET) {
-      ssQuota = "-";
-      ssCount = "-";
+    String ssQuota = "-";
+    String ssCount = "-";
+    long spaceQuota = getSpaceQuota();
+    if (spaceQuota != HdfsConstants.QUOTA_RESET) {
+      ssQuota = StringUtils.byteDesc(spaceQuota);
+      ssCount = StringUtils.byteDesc(getSpaceConsumed());
     }
 
     StringBuilder str = new StringBuilder();
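
For illustration, a minimal sketch of the behavior the rewritten toString() above implements; it assumes RouterQuotaUsage.Builder mirrors the QuotaUsage.Builder setters (the Builder is not shown in this diff), and the numbers are made up:

import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.federation.router.RouterQuotaUsage;

public class QuotaToStringSketch {
  public static void main(String[] args) {
    // An unset namespace quota (QUOTA_RESET) renders as "-" for both the
    // quota and the count; a set storage-space quota is rendered in
    // human-readable form by StringUtils.byteDesc().
    RouterQuotaUsage usage = new RouterQuotaUsage.Builder()
        .quota(HdfsConstants.QUOTA_RESET)       // nsQuota and nsCount -> "-"
        .fileAndDirectoryCount(42)
        .spaceQuota(10L * 1024 * 1024 * 1024)   // ssQuota -> "10 GB"
        .spaceConsumed(1024L * 1024 * 1024)     // ssCount -> "1 GB"
        .build();
    System.out.println(usage);  // e.g. "[NsQuota: -/-, SsQuota: 10 GB/1 GB]"
  }
}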




[hadoop] 11/41: HDFS-14089. RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService. Contributed by Ranith Sardar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3f12355bb3a7a89901b966bcab0556a1d6bf9e23
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Thu Nov 22 08:26:22 2018 +0530

    HDFS-14089. RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService. Contributed by Ranith Sardar.
---
 .../hdfs/server/federation/router/NamenodeHeartbeatService.java     | 3 ++-
 .../java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java | 6 ------
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index 1349aa3..871ebaf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.hdfs.tools.DFSHAAdmin;
 import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;
 import org.codehaus.jettison.json.JSONArray;
 import org.codehaus.jettison.json.JSONObject;
@@ -108,7 +109,7 @@ public class NamenodeHeartbeatService extends PeriodicService {
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
-    this.conf = configuration;
+    this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
index deb6ace..100313e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
@@ -14,8 +14,6 @@
 
 package org.apache.hadoop.fs.contract.router;
 
-import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
-import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_SERVICE_USER_NAME_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
@@ -109,10 +107,6 @@ public final class SecurityConfUtil {
     spnegoPrincipal =
         SPNEGO_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
 
-    // Set auth configuration for mini DFS
-    conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
-    conf.set(HADOOP_SECURITY_SERVICE_USER_NAME_KEY, routerPrincipal);
-
     // Set up principals and keytabs for dfs
     conf.set(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
     conf.set(DFS_NAMENODE_KEYTAB_FILE_KEY, keytab);
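
For context, a rough sketch of what DFSHAAdmin.addSecurityConfiguration() does for the heartbeat service (simplified, not the authoritative source): it copies the NameNode principal into the generic service-user key, which is why the test utility above no longer needs to set HADOOP_SECURITY_SERVICE_USER_NAME_KEY by hand.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class AddSecurityConfSketch {
  // Approximation of DFSHAAdmin.addSecurityConfiguration(Configuration):
  // expose the NameNode principal under the generic service-user key so
  // the RPC client can validate the server's Kerberos principal.
  static Configuration addSecurityConfiguration(Configuration conf) {
    Configuration secured = new HdfsConfiguration(conf);
    secured.set(
        CommonConfigurationKeysPublic.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        secured.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    return secured;
  }
}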




[hadoop] 32/41: HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2f928256feb88488f1c1010da7a682ed71cb4253
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Mon Jan 28 09:03:32 2019 +0530

    HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.
---
 .../server/federation/router/RouterClientProtocol.java   |  7 +++++++
 .../federation/router/TestRouterRpcMultiDestination.java | 16 ++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 09f7e5f..485c103 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -1629,6 +1629,7 @@ public class RouterClientProtocol implements ClientProtocol {
     long quota = 0;
     long spaceConsumed = 0;
     long spaceQuota = 0;
+    String ecPolicy = "";
 
     for (ContentSummary summary : summaries) {
       length += summary.getLength();
@@ -1637,6 +1638,11 @@ public class RouterClientProtocol implements ClientProtocol {
       quota += summary.getQuota();
       spaceConsumed += summary.getSpaceConsumed();
       spaceQuota += summary.getSpaceQuota();
+      // We take the EC policy from the first response, as we assume that
+      // the EC policy of each sub-cluster is the same.
+      if (ecPolicy.isEmpty()) {
+        ecPolicy = summary.getErasureCodingPolicy();
+      }
     }
 
     ContentSummary ret = new ContentSummary.Builder()
@@ -1646,6 +1652,7 @@ public class RouterClientProtocol implements ClientProtocol {
         .quota(quota)
         .spaceConsumed(spaceConsumed)
         .spaceQuota(spaceQuota)
+        .erasureCodingPolicy(ecPolicy)
         .build();
     return ret;
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
index 3101748..3d941bb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
@@ -41,6 +41,7 @@ import java.util.TreeSet;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -230,6 +231,21 @@ public class TestRouterRpcMultiDestination extends TestRouterRpc {
   }
 
   @Test
+  public void testGetContentSummaryEc() throws Exception {
+    DistributedFileSystem routerDFS =
+        (DistributedFileSystem) getRouterFileSystem();
+    Path dir = new Path("/");
+    String expectedECPolicy = "RS-6-3-1024k";
+    try {
+      routerDFS.setErasureCodingPolicy(dir, expectedECPolicy);
+      assertEquals(expectedECPolicy,
+          routerDFS.getContentSummary(dir).getErasureCodingPolicy());
+    } finally {
+      routerDFS.unsetErasureCodingPolicy(dir);
+    }
+  }
+
+  @Test
   public void testSubclusterDown() throws Exception {
     final int totalFiles = 6;
 




[hadoop] 14/41: Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f659c2784a7d8b4c423756f42f2e3505d0ba83ea
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Tue Dec 4 22:16:00 2018 +0800

    Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."
    
    This reverts commit 7c0d6f65fde12ead91ed7c706521ad1d3dc995f8.
---
 .../federation/router/ConnectionManager.java       | 20 ++++-----
 .../server/federation/router/ConnectionPool.java   | 14 +-----
 .../server/federation/router/RBFConfigKeys.java    |  5 ---
 .../src/main/resources/hdfs-rbf-default.xml        |  8 ----
 .../federation/router/TestConnectionManager.java   | 51 +++-------------------
 5 files changed, 15 insertions(+), 83 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 74bbbb5..fa2bf94 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,6 +49,10 @@ public class ConnectionManager {
   private static final Logger LOG =
       LoggerFactory.getLogger(ConnectionManager.class);
 
+  /** Minimum amount of active connections: 50%. */
+  protected static final float MIN_ACTIVE_RATIO = 0.5f;
+
+
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -56,8 +60,6 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
-  /** Min ratio of active connections per user + nn. */
-  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -94,13 +96,10 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
     this.conf = config;
 
-    // Configure minimum, maximum and active connection pools
+    // Configure minimum and maximum connection pools
     this.maxSize = this.conf.getInt(
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
-    this.minActiveRatio = this.conf.getFloat(
-        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
-        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
     // Map with the connections indexed by UGI and Namenode
     this.pools = new HashMap<>();
@@ -204,8 +203,7 @@ public class ConnectionManager {
         pool = this.pools.get(connectionId);
         if (pool == null) {
           pool = new ConnectionPool(
-              this.conf, nnAddress, ugi, this.minSize, this.maxSize,
-              this.minActiveRatio, protocol);
+              this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
           this.pools.put(connectionId, pool);
         }
       } finally {
@@ -328,9 +326,8 @@ public class ConnectionManager {
       long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
       int total = pool.getNumConnections();
       int active = pool.getNumActiveConnections();
-      float poolMinActiveRatio = pool.getMinActiveRatio();
       if (timeSinceLastActive > connectionCleanupPeriodMs ||
-          active < poolMinActiveRatio * total) {
+          active < MIN_ACTIVE_RATIO * total) {
         // Remove and close 1 connection
         List<ConnectionContext> conns = pool.removeConnections(1);
         for (ConnectionContext conn : conns) {
@@ -415,9 +412,8 @@ public class ConnectionManager {
           try {
             int total = pool.getNumConnections();
             int active = pool.getNumActiveConnections();
-            float poolMinActiveRatio = pool.getMinActiveRatio();
             if (pool.getNumConnections() < pool.getMaxSize() &&
-                active >= poolMinActiveRatio * total) {
+                active >= MIN_ACTIVE_RATIO * total) {
               ConnectionContext conn = pool.newConnection();
               pool.addConnection(conn);
             } else {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index f868521..fab3b81 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,8 +91,6 @@ public class ConnectionPool {
   private final int minSize;
   /** Max number of connections per user. */
   private final int maxSize;
-  /** Min ratio of active connections per user. */
-  private final float minActiveRatio;
 
   /** The last time a connection was active. */
   private volatile long lastActiveTime = 0;
@@ -100,7 +98,7 @@ public class ConnectionPool {
 
   protected ConnectionPool(Configuration config, String address,
       UserGroupInformation user, int minPoolSize, int maxPoolSize,
-      float minActiveRatio, Class<?> proto) throws IOException {
+      Class<?> proto) throws IOException {
 
     this.conf = config;
 
@@ -114,7 +112,6 @@ public class ConnectionPool {
     // Set configuration parameters for the pool
     this.minSize = minPoolSize;
     this.maxSize = maxPoolSize;
-    this.minActiveRatio = minActiveRatio;
 
     // Add minimum connections to the pool
     for (int i=0; i<this.minSize; i++) {
@@ -144,15 +141,6 @@ public class ConnectionPool {
   }
 
   /**
-   * Get the minimum ratio of active connections in this pool.
-   *
-   * @return Minimum ratio of active connections.
-   */
-  protected float getMinActiveRatio() {
-    return this.minActiveRatio;
-  }
-
-  /**
    * Get the connection pool identifier.
    *
    * @return Connection pool identifier.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 0070de7..10018fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -102,11 +102,6 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "connection.creator.queue-size";
   public static final int
       DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT = 100;
-  public static final String
-      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO =
-      FEDERATION_ROUTER_PREFIX + "connection.min-active-ratio";
-  public static final float
-      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT = 0.5f;
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
       FEDERATION_ROUTER_PREFIX + "connection.pool-size";
   public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index afb3c32..09050bb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -118,14 +118,6 @@
   </property>
 
   <property>
-    <name>dfs.federation.router.connection.min-active-ratio</name>
-    <value>0.5f</value>
-    <description>
-      Minimum active ratio of connections from the router to namenodes.
-    </description>
-  </property>
-
-  <property>
     <name>dfs.federation.router.connection.clean.ms</name>
     <value>10000</value>
     <description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
index 6c1e448..765f6c8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
@@ -80,14 +80,14 @@ public class TestConnectionManager {
     Map<ConnectionPoolId, ConnectionPool> poolMap = connManager.getPools();
 
     ConnectionPool pool1 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
     addConnectionsToPool(pool1, 9, 4);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
         pool1);
 
     ConnectionPool pool2 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, 0.5f, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, ClientProtocol.class);
     addConnectionsToPool(pool2, 10, 10);
     poolMap.put(
         new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -110,7 +110,7 @@ public class TestConnectionManager {
 
     // Make sure the number of connections doesn't go below minSize
     ConnectionPool pool3 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, 0.5f, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, ClientProtocol.class);
     addConnectionsToPool(pool3, 8, 0);
     poolMap.put(
         new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -171,7 +171,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -196,7 +196,7 @@ public class TestConnectionManager {
   @Test
   public void testValidClientIndex() throws Exception {
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, 0.5f, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, ClientProtocol.class);
     for(int i = -3; i <= 3; i++) {
       pool.getClientIndex().set(i);
       ConnectionContext conn = pool.getConnection();
@@ -212,7 +212,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, NamenodeProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, NamenodeProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(
@@ -262,43 +262,4 @@ public class TestConnectionManager {
     }
   }
 
-  @Test
-  public void testConfigureConnectionActiveRatio() throws IOException {
-    final int totalConns = 10;
-    int activeConns = 7;
-
-    Configuration tmpConf = new Configuration();
-    // Set dfs.federation.router.connection.min-active-ratio 0.8f
-    tmpConf.setFloat(
-        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, 0.8f);
-    ConnectionManager tmpConnManager = new ConnectionManager(tmpConf);
-    tmpConnManager.start();
-
-    // Create one new connection pool
-    tmpConnManager.getConnection(TEST_USER1, TEST_NN_ADDRESS,
-        NamenodeProtocol.class);
-
-    Map<ConnectionPoolId, ConnectionPool> poolMap = tmpConnManager.getPools();
-    ConnectionPoolId connectionPoolId = new ConnectionPoolId(TEST_USER1,
-        TEST_NN_ADDRESS, NamenodeProtocol.class);
-    ConnectionPool pool = poolMap.get(connectionPoolId);
-
-    // Test min active ratio is 0.8f
-    assertEquals(0.8f, pool.getMinActiveRatio(), 0.001f);
-
-    pool.getConnection().getClient();
-    // Test there is one active connection in pool
-    assertEquals(1, pool.getNumActiveConnections());
-
-    // Add other 6 active/9 total connections to pool
-    addConnectionsToPool(pool, totalConns - 1, activeConns - 1);
-
-    // There are 7 active connections.
-    // The active number is less than totalConns(10) * minActiveRatio(0.8f).
-    // We can cleanup the pool
-    tmpConnManager.cleanup(pool);
-    assertEquals(totalConns - 1, pool.getNumConnections());
-
-    tmpConnManager.close();
-  }
 }




[hadoop] 13/41: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0ffeac3ae0c37acd1679ab91335a0746081b5cb7
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Tue Dec 4 19:58:38 2018 +0800

    HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.
---
 .../federation/router/ConnectionManager.java       | 20 +++++----
 .../server/federation/router/ConnectionPool.java   | 14 +++++-
 .../server/federation/router/RBFConfigKeys.java    |  5 +++
 .../src/main/resources/hdfs-rbf-default.xml        |  8 ++++
 .../federation/router/TestConnectionManager.java   | 51 +++++++++++++++++++---
 5 files changed, 83 insertions(+), 15 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..74bbbb5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
       LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
     this.conf = config;
 
-    // Configure minimum and maximum connection pools
+    // Configure minimum, maximum and active connection pools
     this.maxSize = this.conf.getInt(
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
         RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+    this.minActiveRatio = this.conf.getFloat(
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
     // Map with the connections indexed by UGI and Namenode
     this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
         pool = this.pools.get(connectionId);
         if (pool == null) {
           pool = new ConnectionPool(
-              this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+              this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+              this.minActiveRatio, protocol);
           this.pools.put(connectionId, pool);
         }
       } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
       long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
       int total = pool.getNumConnections();
       int active = pool.getNumActiveConnections();
+      float poolMinActiveRatio = pool.getMinActiveRatio();
       if (timeSinceLastActive > connectionCleanupPeriodMs ||
-          active < MIN_ACTIVE_RATIO * total) {
+          active < poolMinActiveRatio * total) {
         // Remove and close 1 connection
         List<ConnectionContext> conns = pool.removeConnections(1);
         for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
           try {
             int total = pool.getNumConnections();
             int active = pool.getNumActiveConnections();
+            float poolMinActiveRatio = pool.getMinActiveRatio();
             if (pool.getNumConnections() < pool.getMaxSize() &&
-                active >= MIN_ACTIVE_RATIO * total) {
+                active >= poolMinActiveRatio * total) {
               ConnectionContext conn = pool.newConnection();
               pool.addConnection(conn);
             } else {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index fab3b81..f868521 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,6 +91,8 @@ public class ConnectionPool {
   private final int minSize;
   /** Max number of connections per user. */
   private final int maxSize;
+  /** Min ratio of active connections per user. */
+  private final float minActiveRatio;
 
   /** The last time a connection was active. */
   private volatile long lastActiveTime = 0;
@@ -98,7 +100,7 @@ public class ConnectionPool {
 
   protected ConnectionPool(Configuration config, String address,
       UserGroupInformation user, int minPoolSize, int maxPoolSize,
-      Class<?> proto) throws IOException {
+      float minActiveRatio, Class<?> proto) throws IOException {
 
     this.conf = config;
 
@@ -112,6 +114,7 @@ public class ConnectionPool {
     // Set configuration parameters for the pool
     this.minSize = minPoolSize;
     this.maxSize = maxPoolSize;
+    this.minActiveRatio = minActiveRatio;
 
     // Add minimum connections to the pool
     for (int i=0; i<this.minSize; i++) {
@@ -141,6 +144,15 @@ public class ConnectionPool {
   }
 
   /**
+   * Get the minimum ratio of active connections in this pool.
+   *
+   * @return Minimum ratio of active connections.
+   */
+  protected float getMinActiveRatio() {
+    return this.minActiveRatio;
+  }
+
+  /**
    * Get the connection pool identifier.
    *
    * @return Connection pool identifier.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 10018fe..0070de7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -102,6 +102,11 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "connection.creator.queue-size";
   public static final int
       DFS_ROUTER_NAMENODE_CONNECTION_CREATOR_QUEUE_SIZE_DEFAULT = 100;
+  public static final String
+      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO =
+      FEDERATION_ROUTER_PREFIX + "connection.min-active-ratio";
+  public static final float
+      DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT = 0.5f;
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
       FEDERATION_ROUTER_PREFIX + "connection.pool-size";
   public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 09050bb..afb3c32 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -118,6 +118,14 @@
   </property>
 
   <property>
+    <name>dfs.federation.router.connection.min-active-ratio</name>
+    <value>0.5f</value>
+    <description>
+      Minimum active ratio of connections from the router to namenodes.
+    </description>
+  </property>
+
+  <property>
     <name>dfs.federation.router.connection.clean.ms</name>
     <value>10000</value>
     <description>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
index 765f6c8..6c1e448 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
@@ -80,14 +80,14 @@ public class TestConnectionManager {
     Map<ConnectionPoolId, ConnectionPool> poolMap = connManager.getPools();
 
     ConnectionPool pool1 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool1, 9, 4);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
         pool1);
 
     ConnectionPool pool2 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER2, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool2, 10, 10);
     poolMap.put(
         new ConnectionPoolId(TEST_USER2, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -110,7 +110,7 @@ public class TestConnectionManager {
 
     // Make sure the number of connections doesn't go below minSize
     ConnectionPool pool3 = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER3, 2, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool3, 8, 0);
     poolMap.put(
         new ConnectionPoolId(TEST_USER3, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -171,7 +171,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, ClientProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(TEST_USER1, TEST_NN_ADDRESS, ClientProtocol.class),
@@ -196,7 +196,7 @@ public class TestConnectionManager {
   @Test
   public void testValidClientIndex() throws Exception {
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, ClientProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 2, 2, 0.5f, ClientProtocol.class);
     for(int i = -3; i <= 3; i++) {
       pool.getClientIndex().set(i);
       ConnectionContext conn = pool.getConnection();
@@ -212,7 +212,7 @@ public class TestConnectionManager {
     int activeConns = 5;
 
     ConnectionPool pool = new ConnectionPool(
-        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, NamenodeProtocol.class);
+        conf, TEST_NN_ADDRESS, TEST_USER1, 0, 10, 0.5f, NamenodeProtocol.class);
     addConnectionsToPool(pool, totalConns, activeConns);
     poolMap.put(
         new ConnectionPoolId(
@@ -262,4 +262,43 @@ public class TestConnectionManager {
     }
   }
 
+  @Test
+  public void testConfigureConnectionActiveRatio() throws IOException {
+    final int totalConns = 10;
+    int activeConns = 7;
+
+    Configuration tmpConf = new Configuration();
+    // Set dfs.federation.router.connection.min-active-ratio 0.8f
+    tmpConf.setFloat(
+        RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, 0.8f);
+    ConnectionManager tmpConnManager = new ConnectionManager(tmpConf);
+    tmpConnManager.start();
+
+    // Create one new connection pool
+    tmpConnManager.getConnection(TEST_USER1, TEST_NN_ADDRESS,
+        NamenodeProtocol.class);
+
+    Map<ConnectionPoolId, ConnectionPool> poolMap = tmpConnManager.getPools();
+    ConnectionPoolId connectionPoolId = new ConnectionPoolId(TEST_USER1,
+        TEST_NN_ADDRESS, NamenodeProtocol.class);
+    ConnectionPool pool = poolMap.get(connectionPoolId);
+
+    // Test min active ratio is 0.8f
+    assertEquals(0.8f, pool.getMinActiveRatio(), 0.001f);
+
+    pool.getConnection().getClient();
+    // Test there is one active connection in pool
+    assertEquals(1, pool.getNumActiveConnections());
+
+    // Add other 6 active/9 total connections to pool
+    addConnectionsToPool(pool, totalConns - 1, activeConns - 1);
+
+    // There are 7 active connections.
+    // The active number is less than totalConns(10) * minActiveRatio(0.8f).
+    // We can cleanup the pool
+    tmpConnManager.cleanup(pool);
+    assertEquals(totalConns - 1, pool.getNumConnections());
+
+    tmpConnManager.close();
+  }
 }
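
The configurable ratio only changes one comparison in the cleaner; a minimal arithmetic sketch using the same numbers as the new test:

public class MinActiveRatioSketch {
  public static void main(String[] args) {
    int total = 10;               // pool.getNumConnections()
    int active = 7;               // pool.getNumActiveConnections()
    float minActiveRatio = 0.8f;  // dfs.federation.router.connection.min-active-ratio
    // The cleaner removes an idle connection when the active share drops
    // below the ratio: 7 < 0.8 * 10, matching the expected 10 -> 9 above.
    boolean shouldTrim = active < minActiveRatio * total;
    System.out.println(shouldTrim);  // true
  }
}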




[hadoop] 40/41: HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 22d23ded7e934faf7d95e34d010753ec94500242
Author: Giovanni Matteo Fumarola <gi...@apache.org>
AuthorDate: Fri Feb 15 10:47:17 2019 -0800

    HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri.
---
 .../hadoop/hdfs/protocol/ECBlockGroupStats.java    | 71 ++++++++++++++++++++++
 .../server/federation/router/ErasureCoding.java    | 29 +--------
 .../server/federation/router/RouterRpcClient.java  | 19 ++----
 .../server/federation/router/TestRouterRpc.java    | 48 +++++++++++----
 4 files changed, 114 insertions(+), 53 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
index 3dde604..1ead5c1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
@@ -17,6 +17,10 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import java.util.Collection;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
@@ -103,4 +107,71 @@ public final class ECBlockGroupStats {
     statsBuilder.append("]");
     return statsBuilder.toString();
   }
+
+  @Override
+  public int hashCode() {
+    return new HashCodeBuilder()
+        .append(lowRedundancyBlockGroups)
+        .append(corruptBlockGroups)
+        .append(missingBlockGroups)
+        .append(bytesInFutureBlockGroups)
+        .append(pendingDeletionBlocks)
+        .append(highestPriorityLowRedundancyBlocks)
+        .toHashCode();
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+    ECBlockGroupStats other = (ECBlockGroupStats)o;
+    return new EqualsBuilder()
+        .append(lowRedundancyBlockGroups, other.lowRedundancyBlockGroups)
+        .append(corruptBlockGroups, other.corruptBlockGroups)
+        .append(missingBlockGroups, other.missingBlockGroups)
+        .append(bytesInFutureBlockGroups, other.bytesInFutureBlockGroups)
+        .append(pendingDeletionBlocks, other.pendingDeletionBlocks)
+        .append(highestPriorityLowRedundancyBlocks,
+            other.highestPriorityLowRedundancyBlocks)
+        .isEquals();
+  }
+
+  /**
+   * Merge the multiple ECBlockGroupStats.
+   * @param stats Collection of stats to merge.
+   * @return A new ECBlockGroupStats merging all the input ones
+   */
+  public static ECBlockGroupStats merge(Collection<ECBlockGroupStats> stats) {
+    long lowRedundancyBlockGroups = 0;
+    long corruptBlockGroups = 0;
+    long missingBlockGroups = 0;
+    long bytesInFutureBlockGroups = 0;
+    long pendingDeletionBlocks = 0;
+    long highestPriorityLowRedundancyBlocks = 0;
+    boolean hasHighestPriorityLowRedundancyBlocks = false;
+
+    for (ECBlockGroupStats stat : stats) {
+      lowRedundancyBlockGroups += stat.getLowRedundancyBlockGroups();
+      corruptBlockGroups += stat.getCorruptBlockGroups();
+      missingBlockGroups += stat.getMissingBlockGroups();
+      bytesInFutureBlockGroups += stat.getBytesInFutureBlockGroups();
+      pendingDeletionBlocks += stat.getPendingDeletionBlocks();
+      if (stat.hasHighestPriorityLowRedundancyBlocks()) {
+        hasHighestPriorityLowRedundancyBlocks = true;
+        highestPriorityLowRedundancyBlocks +=
+            stat.getHighestPriorityLowRedundancyBlocks();
+      }
+    }
+    if (hasHighestPriorityLowRedundancyBlocks) {
+      return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
+          missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks,
+          highestPriorityLowRedundancyBlocks);
+    }
+    return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
+        missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks);
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
index f4584b1..97c5f6a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
@@ -187,33 +187,6 @@ public class ErasureCoding {
         rpcClient.invokeConcurrent(
             nss, method, true, false, ECBlockGroupStats.class);
 
-    // Merge the stats from all the namespaces
-    long lowRedundancyBlockGroups = 0;
-    long corruptBlockGroups = 0;
-    long missingBlockGroups = 0;
-    long bytesInFutureBlockGroups = 0;
-    long pendingDeletionBlocks = 0;
-    long highestPriorityLowRedundancyBlocks = 0;
-    boolean hasHighestPriorityLowRedundancyBlocks = false;
-
-    for (ECBlockGroupStats stats : allStats.values()) {
-      lowRedundancyBlockGroups += stats.getLowRedundancyBlockGroups();
-      corruptBlockGroups += stats.getCorruptBlockGroups();
-      missingBlockGroups += stats.getMissingBlockGroups();
-      bytesInFutureBlockGroups += stats.getBytesInFutureBlockGroups();
-      pendingDeletionBlocks += stats.getPendingDeletionBlocks();
-      if (stats.hasHighestPriorityLowRedundancyBlocks()) {
-        hasHighestPriorityLowRedundancyBlocks = true;
-        highestPriorityLowRedundancyBlocks +=
-            stats.getHighestPriorityLowRedundancyBlocks();
-      }
-    }
-    if (hasHighestPriorityLowRedundancyBlocks) {
-      return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
-          missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks,
-          highestPriorityLowRedundancyBlocks);
-    }
-    return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
-        missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks);
+    return ECBlockGroupStats.merge(allStats.values());
   }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index d21bde3..3d80c41 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -24,16 +24,15 @@ import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.net.InetSocketAddress;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
-import java.util.HashSet;
 import java.util.LinkedHashMap;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
-import java.util.Set;
 import java.util.TreeMap;
 import java.util.concurrent.ArrayBlockingQueue;
 import java.util.concurrent.BlockingQueue;
@@ -1061,8 +1060,8 @@ public class RouterRpcClient {
       }
     }
 
-    List<T> orderedLocations = new LinkedList<>();
-    Set<Callable<Object>> callables = new HashSet<>();
+    List<T> orderedLocations = new ArrayList<>();
+    List<Callable<Object>> callables = new ArrayList<>();
     for (final T location : locations) {
       String nsId = location.getNameserviceId();
       final List<? extends FederationNamenodeContext> namenodes =
@@ -1080,20 +1079,12 @@ public class RouterRpcClient {
             nnLocation = (T)new RemoteLocation(nsId, nnId, location.getDest());
           }
           orderedLocations.add(nnLocation);
-          callables.add(new Callable<Object>() {
-            public Object call() throws Exception {
-              return invokeMethod(ugi, nnList, proto, m, paramList);
-            }
-          });
+          callables.add(() -> invokeMethod(ugi, nnList, proto, m, paramList));
         }
       } else {
         // Call the objectGetter in order of nameservices in the NS list
         orderedLocations.add(location);
-        callables.add(new Callable<Object>() {
-          public Object call() throws Exception {
-            return invokeMethod(ugi, namenodes, proto, m, paramList);
-          }
-        });
+        callables.add(() -> invokeMethod(ugi, namenodes, proto, m, paramList));
       }
     }
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index 2d26e11..d943076 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -37,6 +37,7 @@ import static org.junit.Assert.fail;
 import java.io.IOException;
 import java.lang.reflect.Method;
 import java.net.URISyntaxException;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Comparator;
 import java.util.EnumSet;
@@ -47,6 +48,7 @@ import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.Set;
+import java.util.TreeMap;
 import java.util.TreeSet;
 import java.util.concurrent.TimeUnit;
 
@@ -120,6 +122,11 @@ public class TestRouterRpc {
   private static final Logger LOG =
       LoggerFactory.getLogger(TestRouterRpc.class);
 
+  private static final int NUM_SUBCLUSTERS = 2;
+  // We need at least 6 DNs to test Erasure Coding with RS-6-3-64k
+  private static final int NUM_DNS = 6;
+
+
   private static final Comparator<ErasureCodingPolicyInfo> EC_POLICY_CMP =
       new Comparator<ErasureCodingPolicyInfo>() {
         public int compare(
@@ -165,9 +172,9 @@ public class TestRouterRpc {
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
-    cluster = new MiniRouterDFSCluster(false, 2);
-    // We need 6 DNs to test Erasure Coding with RS-6-3-64k
-    cluster.setNumDatanodesPerNameservice(6);
+    cluster = new MiniRouterDFSCluster(false, NUM_SUBCLUSTERS);
+    cluster.setNumDatanodesPerNameservice(NUM_DNS);
+    cluster.setIndependentDNs();
 
     // Start NNs and DNs and wait until ready
     cluster.startCluster();
@@ -586,8 +593,13 @@ public class TestRouterRpc {
 
     DatanodeInfo[] combinedData =
         routerProtocol.getDatanodeReport(DatanodeReportType.ALL);
+    final Map<Integer, String> routerDNMap = new TreeMap<>();
+    for (DatanodeInfo dn : combinedData) {
+      String subcluster = dn.getNetworkLocation().split("/")[1];
+      routerDNMap.put(dn.getXferPort(), subcluster);
+    }
 
-    Set<Integer> individualData = new HashSet<Integer>();
+    final Map<Integer, String> nnDNMap = new TreeMap<>();
     for (String nameservice : cluster.getNameservices()) {
       NamenodeContext n = cluster.getNamenode(nameservice, null);
       DFSClient client = n.getClient();
@@ -597,10 +609,10 @@ public class TestRouterRpc {
       for (int i = 0; i < data.length; i++) {
         // Collect unique DNs based on their xfer port
         DatanodeInfo info = data[i];
-        individualData.add(info.getXferPort());
+        nnDNMap.put(info.getXferPort(), nameservice);
       }
     }
-    assertEquals(combinedData.length, individualData.size());
+    assertEquals(nnDNMap, routerDNMap);
   }
 
   @Test
@@ -1234,7 +1246,7 @@ public class TestRouterRpc {
   }
 
   @Test
-  public void testErasureCoding() throws IOException {
+  public void testErasureCoding() throws Exception {
 
     LOG.info("List the available erasurce coding policies");
     ErasureCodingPolicyInfo[] policies = checkErasureCodingPolicies();
@@ -1340,8 +1352,22 @@ public class TestRouterRpc {
 
     LOG.info("Check the stats");
     ECBlockGroupStats statsRouter = routerProtocol.getECBlockGroupStats();
-    ECBlockGroupStats statsNamenode = nnProtocol.getECBlockGroupStats();
-    assertEquals(statsNamenode.toString(), statsRouter.toString());
+    ECBlockGroupStats statsNamenode = getNamenodeECBlockGroupStats();
+    assertEquals(statsNamenode, statsRouter);
+  }
+
+  /**
+   * Get the EC stats from all namenodes and aggregate them.
+   * @return Aggregated EC stats from all namenodes.
+   * @throws Exception If we cannot get the stats.
+   */
+  private ECBlockGroupStats getNamenodeECBlockGroupStats() throws Exception {
+    List<ECBlockGroupStats> nnStats = new ArrayList<>();
+    for (NamenodeContext nnContext : cluster.getNamenodes()) {
+      ClientProtocol cp = nnContext.getClient().getNamenode();
+      nnStats.add(cp.getECBlockGroupStats());
+    }
+    return ECBlockGroupStats.merge(nnStats);
   }
 
   @Test
@@ -1375,9 +1401,9 @@ public class TestRouterRpc {
         router.getRouter().getNamenodeMetrics();
     final String jsonString0 = metrics.getLiveNodes();
 
-    // We should have 12 nodes in total
+    // We should have the nodes in all the subclusters
     JSONObject jsonObject = new JSONObject(jsonString0);
-    assertEquals(12, jsonObject.names().length());
+    assertEquals(NUM_SUBCLUSTERS * NUM_DNS, jsonObject.names().length());
 
     // We should be caching this information
     String jsonString1 = metrics.getLiveNodes();
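
A quick usage sketch of the new ECBlockGroupStats.merge() helper with made-up per-namespace stats; the five-argument constructor is the same one merge() uses internally:

import java.util.Arrays;

import org.apache.hadoop.hdfs.protocol.ECBlockGroupStats;

public class MergeEcStatsSketch {
  public static void main(String[] args) {
    // (lowRedundancy, corrupt, missing, bytesInFuture, pendingDeletion)
    ECBlockGroupStats ns0 = new ECBlockGroupStats(1, 0, 2, 0, 5);
    ECBlockGroupStats ns1 = new ECBlockGroupStats(0, 1, 1, 0, 3);
    // Each field is summed across namespaces, which is what the router's
    // getECBlockGroupStats() and the updated test now rely on.
    ECBlockGroupStats merged = ECBlockGroupStats.merge(Arrays.asList(ns0, ns1));
    System.out.println(merged);  // sums: 1, 1, 3, 0, 8
  }
}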




[hadoop] 31/41: HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f40da42ea732cb18e3c50c539dece7168330d6f4
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Fri Jan 25 11:28:48 2019 +0530

    HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.
---
 .../hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml            | 3 ++-
 .../hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md          | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 20ae778..afe3ad1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -275,7 +275,8 @@
     <name>dfs.federation.router.file.resolver.client.class</name>
     <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
     <description>
-      Class to resolve files to subclusters.
+      Class to resolve files to subclusters. To enable multiple subclusters for a mount point,
+      set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.
     </description>
   </property>
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index bcf8fa9..2ae0c2b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -404,7 +404,7 @@ Forwarding client requests to the right subcluster.
 
 | Property | Default | Description|
 |:---- |:---- |:---- |
-| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. |
+| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. To enable multiple subclusters for a mount point, set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver. |
 | dfs.federation.router.namenode.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver` | Class to resolve the namenode for a subcluster. |
 
 ### Namenode monitoring
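
For illustration, the same setting applied programmatically (in a real deployment it would normally live in hdfs-rbf-site.xml rather than in code):

import org.apache.hadoop.conf.Configuration;

public class MultiDestResolverSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Switch the router's file resolver so a single mount point can map
    // to several subclusters, as described above.
    conf.set("dfs.federation.router.file.resolver.client.class",
        "org.apache.hadoop.hdfs.server.federation.resolver."
            + "MultipleDestinationMountTableResolver");
    System.out.println(
        conf.get("dfs.federation.router.file.resolver.client.class"));
  }
}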




[hadoop] 05/41: HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0367198276d273ea21923544b382bb531da70a14
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Wed Nov 7 07:33:37 2018 +0530

    HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri.
---
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml        |  10 ++
 .../server/federation/router/RBFConfigKeys.java    |  11 ++
 .../hdfs/server/federation/router/Router.java      |  28 ++++
 .../federation/router/RouterAdminServer.java       |   7 +
 .../server/federation/router/RouterHttpServer.java |   5 +-
 .../server/federation/router/RouterRpcClient.java  |   9 +-
 .../server/federation/router/RouterRpcServer.java  |  12 ++
 .../src/main/resources/hdfs-rbf-default.xml        |  47 +++++++
 .../fs/contract/router/RouterHDFSContract.java     |   9 +-
 .../fs/contract/router/SecurityConfUtil.java       | 156 +++++++++++++++++++++
 .../router/TestRouterHDFSContractAppendSecure.java |  46 ++++++
 .../router/TestRouterHDFSContractConcatSecure.java |  51 +++++++
 .../router/TestRouterHDFSContractCreateSecure.java |  48 +++++++
 .../router/TestRouterHDFSContractDeleteSecure.java |  46 ++++++
 .../TestRouterHDFSContractGetFileStatusSecure.java |  47 +++++++
 .../router/TestRouterHDFSContractMkdirSecure.java  |  48 +++++++
 .../router/TestRouterHDFSContractOpenSecure.java   |  47 +++++++
 .../router/TestRouterHDFSContractRenameSecure.java |  48 +++++++
 .../TestRouterHDFSContractRootDirectorySecure.java |  63 +++++++++
 .../router/TestRouterHDFSContractSeekSecure.java   |  48 +++++++
 .../TestRouterHDFSContractSetTimesSecure.java      |  48 +++++++
 .../server/federation/MiniRouterDFSCluster.java    |  58 +++++++-
 22 files changed, 879 insertions(+), 13 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index 6886f00..f38205a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -35,6 +35,16 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   <dependencies>
     <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk16</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minikdc</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-common</artifactId>
       <scope>provided</scope>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index bbd4250..fa474f4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -242,4 +242,15 @@ public class RBFConfigKeys extends CommonConfigurationKeysPublic {
       FEDERATION_ROUTER_PREFIX + "quota-cache.update.interval";
   public static final long DFS_ROUTER_QUOTA_CACHE_UPATE_INTERVAL_DEFAULT =
       60000;
+
+  // HDFS Router security
+  public static final String DFS_ROUTER_KEYTAB_FILE_KEY =
+      FEDERATION_ROUTER_PREFIX + "keytab.file";
+  public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_KEY =
+      FEDERATION_ROUTER_PREFIX + "kerberos.principal";
+  public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY =
+      FEDERATION_ROUTER_PREFIX + "kerberos.principal.hostname";
+
+  public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY =
+      FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 5ddc129..3288273 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -17,6 +17,10 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
+
 import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newActiveNamenodeResolver;
 import static org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newFileSubclusterResolver;
 
@@ -41,6 +45,8 @@ import org.apache.hadoop.hdfs.server.federation.store.RouterStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.source.JvmMetrics;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.util.JvmPauseMonitor;
 import org.apache.hadoop.util.Time;
@@ -145,6 +151,11 @@ public class Router extends CompositeService {
     this.conf = configuration;
     updateRouterState(RouterServiceState.INITIALIZING);
 
+    // Enable security for the Router
+    UserGroupInformation.setConfiguration(conf);
+    SecurityUtil.login(conf, DFS_ROUTER_KEYTAB_FILE_KEY,
+        DFS_ROUTER_KERBEROS_PRINCIPAL_KEY, getHostName(conf));
+
     if (conf.getBoolean(
         RBFConfigKeys.DFS_ROUTER_STORE_ENABLE,
         RBFConfigKeys.DFS_ROUTER_STORE_ENABLE_DEFAULT)) {
@@ -246,6 +257,23 @@ public class Router extends CompositeService {
     super.serviceInit(conf);
   }
 
+  /**
+   * Returns the hostname for this Router. If the hostname is not
+   * explicitly configured in the given config, the local host name is used.
+   *
+   * @param config configuration
+   * @return the hostname (NB: may not be a FQDN)
+   * @throws UnknownHostException if the hostname cannot be determined
+   */
+  private static String getHostName(Configuration config)
+      throws UnknownHostException {
+    String name = config.get(DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY);
+    if (name == null) {
+      name = InetAddress.getLocalHost().getHostName();
+    }
+    return name;
+  }
+
   @Override
   protected void serviceStart() throws Exception {
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index e7fec9e..f34dc41 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_DEFAULT;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
 
@@ -27,6 +28,7 @@ import java.util.Set;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HDFSPolicyProvider;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
@@ -142,6 +144,11 @@ public class RouterAdminServer extends AbstractService
         .setVerbose(false)
         .build();
 
+    // Set service-level authorization security policy
+    if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
+      this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
+    }
+
     // The RPC-server port can be ephemeral... ensure we have the correct info
     InetSocketAddress listenAddress = this.adminServer.getListenerAddress();
     this.adminAddress = new InetSocketAddress(
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
index d223e2a..d6a5146 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.hdfs.server.federation.router;
 import java.net.InetSocketAddress;
 
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.server.common.JspHelper;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
@@ -84,8 +83,8 @@ public class RouterHttpServer extends AbstractService {
     String webApp = "router";
     HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(
         this.conf, this.httpAddress, this.httpsAddress, webApp,
-        DFSConfigKeys.DFS_NAMENODE_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
-        DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY);
+        RBFConfigKeys.DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
+        RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY);
 
     this.httpServer = builder.build();
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index 34f51ec..a21e980 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -255,7 +255,14 @@ public class RouterRpcClient {
       // for each individual request.
 
       // TODO Add tokens from the federated UGI
-      connection = this.connectionManager.getConnection(ugi, rpcAddress, proto);
+      UserGroupInformation connUGI = ugi;
+      if (UserGroupInformation.isSecurityEnabled()) {
+        UserGroupInformation routerUser = UserGroupInformation.getLoginUser();
+        connUGI = UserGroupInformation.createProxyUser(
+            ugi.getUserName(), routerUser);
+      }
+      connection = this.connectionManager.getConnection(
+          connUGI, rpcAddress, proto);
       LOG.debug("User {} NN {} is using connection {}",
           ugi.getUserName(), rpcAddress, connection);
     } catch (Exception ex) {
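
A note on the proxy-user change above: with security enabled, the Router opens the NameNode connection as a proxy user, layering the caller's identity on top of the Router's login user. For the NameNode to accept those calls, the Router's principal generally has to be whitelisted through Hadoop's standard impersonation settings; this is the generic hadoop.proxyuser mechanism rather than anything introduced by this patch, and the short name below is a hypothetical example:

    // Assuming the Router logs in with the short name "router"; the
    // wildcard values are deliberately permissive test placeholders.
    conf.set("hadoop.proxyuser.router.hosts", "*");
    conf.set("hadoop.proxyuser.router.groups", "*");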
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 36d3c81..fcb35f4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
 import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT;
@@ -61,6 +62,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.ha.HAServiceProtocol;
 import org.apache.hadoop.hdfs.AddBlockFlag;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.HDFSPolicyProvider;
 import org.apache.hadoop.hdfs.inotify.EventBatchList;
 import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
@@ -175,6 +177,9 @@ public class RouterRpcServer extends AbstractService
   /** Monitor metrics for the RPC calls. */
   private final RouterRpcMonitor rpcMonitor;
 
+  /** If we use service-level authorization for the connections. */
+  private final boolean serviceAuthEnabled;
+
 
   /** Interface to identify the active NN for a nameservice or blockpool ID. */
   private final ActiveNamenodeResolver namenodeResolver;
@@ -266,6 +271,13 @@ public class RouterRpcServer extends AbstractService
     DFSUtil.addPBProtocol(
         conf, NamenodeProtocolPB.class, nnPbService, this.rpcServer);
 
+    // Set service-level authorization security policy
+    this.serviceAuthEnabled = conf.getBoolean(
+        HADOOP_SECURITY_AUTHORIZATION, false);
+    if (this.serviceAuthEnabled) {
+      rpcServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
+    }
+
     // We don't want the server to log the full stack trace for some exceptions
     this.rpcServer.addTerseExceptions(
         RemoteException.class,
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 3f56043..29c3093 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -465,4 +465,51 @@
     </description>
   </property>
 
+  <property>
+    <name>dfs.federation.router.keytab.file</name>
+    <value></value>
+    <description>
+      The keytab file used by the Router to log in as its
+      service principal. The principal name is configured with
+      dfs.federation.router.kerberos.principal.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.kerberos.principal</name>
+    <value></value>
+    <description>
+      The Router service principal. This is typically set to
+      router/_HOST@REALM.TLD. Each Router will substitute _HOST with its
+      own fully qualified hostname at startup. The _HOST placeholder
+      allows using the same configuration setting on all Routers
+      in an HA setup.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.kerberos.principal.hostname</name>
+    <value></value>
+    <description>
+      Optional.  The hostname for the Router containing this
+      configuration file.  Will be different for each machine.
+      Defaults to current hostname.
+    </description>
+  </property>
+
+  <property>
+    <name>dfs.federation.router.kerberos.internal.spnego.principal</name>
+    <value>${dfs.web.authentication.kerberos.principal}</value>
+    <description>
+      The server principal used by the Router for web UI SPNEGO
+      authentication when Kerberos security is enabled. This is
+      typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal
+      begins with the prefix HTTP/ by convention.
+
+      If the value is '*', the web server will attempt to log in with
+      every principal specified in the keytab file
+      dfs.web.authentication.kerberos.keytab.
+    </description>
+  </property>
+
 </configuration>
\ No newline at end of file
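
One detail worth noting in the defaults above: the Router's SPNEGO principal falls back to ${dfs.web.authentication.kerberos.principal}, and the wildcard '*' makes the web server try every principal in the keytab, which is what the test utility later in this patch relies on. A hypothetical override for a single-principal deployment:

    // Hypothetical value following the HTTP/_HOST convention described in
    // the property description above.
    conf.set("dfs.federation.router.kerberos.internal.spnego.principal",
        "HTTP/_HOST@EXAMPLE.COM");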
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
index 97a426e..510cb95 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/RouterHDFSContract.java
@@ -43,12 +43,17 @@ public class RouterHDFSContract extends HDFSContract {
   }
 
   public static void createCluster() throws IOException {
+    createCluster(null);
+  }
+
+  public static void createCluster(Configuration conf) throws IOException {
     try {
-      cluster = new MiniRouterDFSCluster(true, 2);
+      cluster = new MiniRouterDFSCluster(true, 2, conf);
 
       // Start NNs and DNs and wait until ready
-      cluster.startCluster();
+      cluster.startCluster(conf);
 
+      cluster.addRouterOverrides(conf);
       // Start routers with only an RPC service
       cluster.startRouters();
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
new file mode 100644
index 0000000..deb6ace
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
@@ -0,0 +1,156 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_SERVICE_USER_NAME_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KERBEROS_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_KEYTAB_FILE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HTTP_POLICY_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_DATA_TRANSFER_PROTECTION_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_RPC_BIND_HOST_KEY;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
+import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl;
+import org.apache.hadoop.http.HttpConfig;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
+import org.apache.hadoop.test.GenericTestUtils;
+
+/**
+ * Test utility to provide a standard routine to initialize the configuration
+ * for a secure RBF HDFS cluster.
+ */
+public final class SecurityConfUtil {
+
+  // SSL keystore
+  private static String keystoresDir;
+  private static String sslConfDir;
+
+  // Principal short names for the mini DFS cluster
+  private static final String SPNEGO_USER_NAME = "HTTP";
+  private static final String ROUTER_USER_NAME = "router";
+
+  private static String spnegoPrincipal;
+  private static String routerPrincipal;
+
+  private SecurityConfUtil() {
+    // Utility Class
+  }
+
+  public static Configuration initSecurity() throws Exception {
+    // delete old test dir
+    File baseDir = GenericTestUtils.getTestDir(
+        SecurityConfUtil.class.getSimpleName());
+    FileUtil.fullyDelete(baseDir);
+    assertTrue(baseDir.mkdirs());
+
+    // start a mini kdc with default conf
+    Properties kdcConf = MiniKdc.createConf();
+    MiniKdc kdc = new MiniKdc(kdcConf, baseDir);
+    kdc.start();
+
+    Configuration conf = new HdfsConfiguration();
+    SecurityUtil.setAuthenticationMethod(
+        UserGroupInformation.AuthenticationMethod.KERBEROS, conf);
+
+    UserGroupInformation.setConfiguration(conf);
+    assertTrue("Expected configuration to enable security",
+        UserGroupInformation.isSecurityEnabled());
+
+    // Setup the keytab
+    File keytabFile = new File(baseDir, "test.keytab");
+    String keytab = keytabFile.getAbsolutePath();
+
+    // Windows will not reverse-resolve "127.0.0.1" to "localhost".
+    String krbInstance = Path.WINDOWS ? "127.0.0.1" : "localhost";
+
+    kdc.createPrincipal(keytabFile,
+        SPNEGO_USER_NAME + "/" + krbInstance,
+        ROUTER_USER_NAME + "/" + krbInstance);
+
+    routerPrincipal =
+        ROUTER_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
+    spnegoPrincipal =
+        SPNEGO_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
+
+    // Set auth configuration for mini DFS
+    conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
+    conf.set(HADOOP_SECURITY_SERVICE_USER_NAME_KEY, routerPrincipal);
+
+    // Setup principals and keytabs for dfs
+    conf.set(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
+    conf.set(DFS_NAMENODE_KEYTAB_FILE_KEY, keytab);
+    conf.set(DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
+    conf.set(DFS_DATANODE_KEYTAB_FILE_KEY, keytab);
+    conf.set(DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY, spnegoPrincipal);
+    conf.set(DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY, keytab);
+
+    conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY, "localhost:0");
+    conf.set(DFS_DATANODE_HTTPS_ADDRESS_KEY, "localhost:0");
+
+    conf.setBoolean(DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, true);
+    conf.set(DFS_DATA_TRANSFER_PROTECTION_KEY, "authentication");
+    conf.set(DFS_HTTP_POLICY_KEY, HttpConfig.Policy.HTTPS_ONLY.name());
+
+    // Setup SSL configuration
+    keystoresDir = baseDir.getAbsolutePath();
+    sslConfDir = KeyStoreTestUtil.getClasspathDir(
+        SecurityConfUtil.class);
+    KeyStoreTestUtil.setupSSLConfig(
+        keystoresDir, sslConfDir, conf, false);
+    conf.set(DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY,
+        KeyStoreTestUtil.getClientSSLConfigFileName());
+    conf.set(DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_KEY,
+        KeyStoreTestUtil.getServerSSLConfigFileName());
+
+    // Setup principals and keytabs for router
+    conf.set(DFS_ROUTER_KEYTAB_FILE_KEY, keytab);
+    conf.set(DFS_ROUTER_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
+    conf.set(DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY, "*");
+
+    // Setup basic state store
+    conf.setClass(RBFConfigKeys.FEDERATION_STORE_DRIVER_CLASS,
+        StateStoreFileImpl.class, StateStoreDriver.class);
+
+    // We need to specify the host to prevent 0.0.0.0 as the host address
+    conf.set(DFS_ROUTER_RPC_BIND_HOST_KEY, "localhost");
+
+    return conf;
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractAppendSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractAppendSecure.java
new file mode 100644
index 0000000..fe4951d
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractAppendSecure.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractAppendTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+/**
+ * Test secure append operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractAppendSecure
+    extends AbstractContractAppendTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractConcatSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractConcatSecure.java
new file mode 100644
index 0000000..c9a0cc8
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractConcatSecure.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.AbstractContractConcatTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure concat operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractConcatSecure
+    extends AbstractContractConcatTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+    // perform a simple operation on the cluster to verify it is up
+    RouterHDFSContract.getFileSystem().getDefaultBlockSize(new Path("/"));
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractCreateSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractCreateSecure.java
new file mode 100644
index 0000000..dc264b0
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractCreateSecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractCreateTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure create operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractCreateSecure
+    extends AbstractContractCreateTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDeleteSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDeleteSecure.java
new file mode 100644
index 0000000..57cc138
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractDeleteSecure.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractDeleteTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+/**
+ * Test secure delete operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractDeleteSecure
+    extends AbstractContractDeleteTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractGetFileStatusSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractGetFileStatusSecure.java
new file mode 100644
index 0000000..13e4e96
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractGetFileStatusSecure.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure get file status operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractGetFileStatusSecure
+    extends AbstractContractGetFileStatusTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractMkdirSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractMkdirSecure.java
new file mode 100644
index 0000000..7c223a6
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractMkdirSecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractMkdirTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure dir operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractMkdirSecure
+    extends AbstractContractMkdirTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractOpenSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractOpenSecure.java
new file mode 100644
index 0000000..434402c
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractOpenSecure.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractOpenTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure open operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractOpenSecure extends AbstractContractOpenTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRenameSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRenameSecure.java
new file mode 100644
index 0000000..29d7398
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRenameSecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure rename operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractRenameSecure
+    extends AbstractContractRenameTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRootDirectorySecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRootDirectorySecure.java
new file mode 100644
index 0000000..32ec161
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractRootDirectorySecure.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure root dir operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractRootDirectorySecure
+    extends AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() throws IOException {
+    // It doesn't apply because we still have the mount points here
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() throws IOException {
+    // It doesn't apply because we still have the mount points here
+  }
+
+  @Override
+  public void testRecursiveRootListing() throws IOException {
+    // It doesn't apply because we still have the mount points here
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSeekSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSeekSecure.java
new file mode 100644
index 0000000..f281b47
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSeekSecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure seek operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractSeekSecure extends AbstractContractSeekTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSetTimesSecure.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSetTimesSecure.java
new file mode 100644
index 0000000..8f86b95
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/TestRouterHDFSContractSetTimesSecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *   you may not use this file except in compliance with the License.
+ *   You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.fs.contract.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractSetTimesTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.contract.router.SecurityConfUtil.initSecurity;
+
+
+/**
+ * Test secure set times operations on the Router-based FS.
+ */
+public class TestRouterHDFSContractSetTimesSecure
+    extends AbstractContractSetTimesTest {
+
+  @BeforeClass
+  public static void createCluster() throws Exception {
+    RouterHDFSContract.createCluster(initSecurity());
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    RouterHDFSContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RouterHDFSContract(conf);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index e34713d..a5693a6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -28,6 +28,8 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_ADDR
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_BIND_HOST_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICES;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_HTTP_POLICY_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMENODES;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.addDirectory;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.waitNamenodeRegistered;
@@ -85,6 +87,7 @@ import org.apache.hadoop.hdfs.server.federation.router.RouterClient;
 import org.apache.hadoop.hdfs.server.namenode.FSImage;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.Service.STATE;
@@ -270,6 +273,7 @@ public class MiniRouterDFSCluster {
     private int servicePort;
     private int lifelinePort;
     private int httpPort;
+    private int httpsPort;
     private URI fileSystemUri;
     private int index;
     private DFSClient client;
@@ -305,7 +309,12 @@ public class MiniRouterDFSCluster {
       this.rpcPort = nn.getNameNodeAddress().getPort();
       this.servicePort = nn.getServiceRpcAddress().getPort();
       this.lifelinePort = nn.getServiceRpcAddress().getPort();
-      this.httpPort = nn.getHttpAddress().getPort();
+      if (nn.getHttpAddress() != null) {
+        this.httpPort = nn.getHttpAddress().getPort();
+      }
+      if (nn.getHttpsAddress() != null) {
+        this.httpsPort = nn.getHttpsAddress().getPort();
+      }
       this.fileSystemUri = new URI("hdfs://" + namenode.getHostAndPort());
       DistributedFileSystem.setDefaultUri(this.conf, this.fileSystemUri);
 
@@ -328,10 +337,22 @@ public class MiniRouterDFSCluster {
       return namenode.getServiceRpcAddress().getHostName() + ":" + lifelinePort;
     }
 
+    public String getWebAddress() {
+      if (conf.get(DFS_HTTP_POLICY_KEY)
+          .equals(HttpConfig.Policy.HTTPS_ONLY.name())) {
+        return getHttpsAddress();
+      }
+      return getHttpAddress();
+    }
+
     public String getHttpAddress() {
       return namenode.getHttpAddress().getHostName() + ":" + httpPort;
     }
 
+    public String getHttpsAddress() {
+      return namenode.getHttpsAddress().getHostName() + ":" + httpsPort;
+    }
+
     public FileSystem getFileSystem() throws IOException {
       return DistributedFileSystem.get(conf);
     }
@@ -375,22 +396,38 @@ public class MiniRouterDFSCluster {
 
   public MiniRouterDFSCluster(
       boolean ha, int numNameservices, int numNamenodes,
-      long heartbeatInterval, long cacheFlushInterval) {
+      long heartbeatInterval, long cacheFlushInterval,
+      Configuration overrideConf) {
     this.highAvailability = ha;
     this.heartbeatInterval = heartbeatInterval;
     this.cacheFlushInterval = cacheFlushInterval;
-    configureNameservices(numNameservices, numNamenodes);
+    configureNameservices(numNameservices, numNamenodes, overrideConf);
+  }
+
+  public MiniRouterDFSCluster(
+      boolean ha, int numNameservices, int numNamenodes,
+      long heartbeatInterval, long cacheFlushInterval) {
+    this(ha, numNameservices, numNamenodes,
+        heartbeatInterval, cacheFlushInterval, null);
   }
 
   public MiniRouterDFSCluster(boolean ha, int numNameservices) {
     this(ha, numNameservices, 2,
-        DEFAULT_HEARTBEAT_INTERVAL_MS, DEFAULT_CACHE_INTERVAL_MS);
+        DEFAULT_HEARTBEAT_INTERVAL_MS, DEFAULT_CACHE_INTERVAL_MS,
+        null);
   }
 
   public MiniRouterDFSCluster(
       boolean ha, int numNameservices, int numNamenodes) {
     this(ha, numNameservices, numNamenodes,
-        DEFAULT_HEARTBEAT_INTERVAL_MS, DEFAULT_CACHE_INTERVAL_MS);
+        DEFAULT_HEARTBEAT_INTERVAL_MS, DEFAULT_CACHE_INTERVAL_MS,
+        null);
+  }
+
+  public MiniRouterDFSCluster(boolean ha, int numNameservices,
+      Configuration overrideConf) {
+    this(ha, numNameservices, 2,
+        DEFAULT_HEARTBEAT_INTERVAL_MS, DEFAULT_CACHE_INTERVAL_MS, overrideConf);
   }
 
   /**
@@ -447,6 +484,8 @@ public class MiniRouterDFSCluster {
             "127.0.0.1:" + context.httpPort);
         conf.set(DFS_NAMENODE_RPC_BIND_HOST_KEY + "." + suffix,
             "0.0.0.0");
+        conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY + "." + suffix,
+            "127.0.0.1:" + context.httpsPort);
 
         // If the service port is enabled by default, we need to set them up
         boolean servicePortEnabled = false;
@@ -543,7 +582,8 @@ public class MiniRouterDFSCluster {
     return conf;
   }
 
-  public void configureNameservices(int numNameservices, int numNamenodes) {
+  public void configureNameservices(int numNameservices, int numNamenodes,
+      Configuration overrideConf) {
     this.nameservices = new ArrayList<>();
     this.namenodes = new ArrayList<>();
 
@@ -554,6 +594,10 @@ public class MiniRouterDFSCluster {
       this.nameservices.add("ns" + i);
 
       Configuration nnConf = generateNamenodeConfiguration(ns);
+      if (overrideConf != null) {
+        nnConf.addResource(overrideConf);
+      }
+
       if (!highAvailability) {
         context = new NamenodeContext(nnConf, ns, null, nnIndex++);
         this.namenodes.add(context);
@@ -788,7 +832,7 @@ public class MiniRouterDFSCluster {
         NamenodeStatusReport report = new NamenodeStatusReport(
             nn.nameserviceId, nn.namenodeId,
             nn.getRpcAddress(), nn.getServiceAddress(),
-            nn.getLifelineAddress(), nn.getHttpAddress());
+            nn.getLifelineAddress(), nn.getWebAddress());
         FSImage fsImage = nn.namenode.getNamesystem().getFSImage();
         NamespaceInfo nsInfo = fsImage.getStorage().getNamespaceInfo();
         report.setNamespaceInfo(nsInfo);




[hadoop] 36/41: HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d815bc7ce05834c398f12527f1cb5f4d4113f8da
Author: Giovanni Matteo Fumarola <gi...@apache.org>
AuthorDate: Tue Feb 5 10:40:28 2019 -0800

    HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena.
---
 .../server/federation/router/RouterRpcClient.java  | 13 ++++---
 .../federation/router/TestRouterMountTable.java    | 41 ++++++++++++++--------
 2 files changed, 36 insertions(+), 18 deletions(-)
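
The gist of the fix: for single-destination invocations, an IOException raised by the subcluster can carry the physical destination path, so the call is now wrapped and rethrown through processException(), which maps the path back to the mount-point view. A simplified sketch of that localization idea, not the actual implementation (RemoteLocation's getSrc()/getDest() accessors are part of RBF):

    // Illustrative sketch: rewrite the subcluster path in the message back
    // to the federated path, e.g. /ns0/real/dir -> /mount/dir.
    private static IOException localize(IOException ioe, RemoteLocation loc) {
      String msg = ioe.getMessage();
      if (msg == null) {
        return ioe;
      }
      IOException localized =
          new IOException(msg.replace(loc.getDest(), loc.getSrc()));
      localized.setStackTrace(ioe.getStackTrace());
      return localized;
    }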

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index 0b15333..f5985ee 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -1042,10 +1042,15 @@ public class RouterRpcClient {
       String ns = location.getNameserviceId();
       final List<? extends FederationNamenodeContext> namenodes =
           getNamenodesForNameservice(ns);
-      Class<?> proto = method.getProtocol();
-      Object[] paramList = method.getParams(location);
-      Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
-      return Collections.singletonMap(location, (R) result);
+      try {
+        Class<?> proto = method.getProtocol();
+        Object[] paramList = method.getParams(location);
+        Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
+        return Collections.singletonMap(location, (R) result);
+      } catch (IOException ioe) {
+        // Localize the exception
+        throw processException(ioe, location);
+      }
     }
 
     List<T> orderedLocations = new LinkedList<>();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
index 9538d71..4f6f702 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.Collections;
 import java.util.HashMap;
@@ -43,12 +44,14 @@ import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
 import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.AfterClass;
@@ -69,6 +72,7 @@ public class TestRouterMountTable {
   private static long startTime;
   private static FileSystem nnFs0;
   private static FileSystem nnFs1;
+  private static FileSystem routerFs;
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
@@ -92,6 +96,7 @@ public class TestRouterMountTable {
     nnFs0 = nnContext0.getFileSystem();
     nnFs1 = nnContext1.getFileSystem();
     routerContext = cluster.getRandomRouter();
+    routerFs = routerContext.getFileSystem();
     Router router = routerContext.getRouter();
     routerProtocol = routerContext.getClient().getNamenode();
     mountTable = (MountTableResolver) router.getSubclusterResolver();
@@ -136,7 +141,6 @@ public class TestRouterMountTable {
     assertTrue(addMountTable(regularEntry));
 
     // Create a folder which should show in all locations
-    final FileSystem routerFs = routerContext.getFileSystem();
     assertTrue(routerFs.mkdirs(new Path("/regular/newdir")));
 
     FileStatus dirStatusNn =
@@ -261,7 +265,7 @@ public class TestRouterMountTable {
     addEntry.setOwnerName("owner1");
     addEntry.setMode(FsPermission.createImmutable((short) 0775));
     assertTrue(addMountTable(addEntry));
-    FileStatus[] list = routerContext.getFileSystem().listStatus(new Path("/"));
+    FileStatus[] list = routerFs.listStatus(new Path("/"));
     assertEquals("group1", list[0].getGroup());
     assertEquals("owner1", list[0].getOwner());
     assertEquals((short) 0775, list[0].getPermission().toShort());
@@ -282,8 +286,7 @@ public class TestRouterMountTable {
       nnFs0.setOwner(new Path("/tmp/testdir"), "Aowner", "Agroup");
       nnFs0.setPermission(new Path("/tmp/testdir"),
           FsPermission.createImmutable((short) 775));
-      FileStatus[] list =
-          routerContext.getFileSystem().listStatus(new Path("/"));
+      FileStatus[] list = routerFs.listStatus(new Path("/"));
       assertEquals("Agroup", list[0].getGroup());
       assertEquals("Aowner", list[0].getOwner());
       assertEquals((short) 775, list[0].getPermission().toShort());
@@ -313,8 +316,7 @@ public class TestRouterMountTable {
       nnFs1.setOwner(new Path("/tmp/testdir01"), "Aowner", "Agroup");
       nnFs1.setPermission(new Path("/tmp/testdir01"),
           FsPermission.createImmutable((short) 775));
-      FileStatus[] list =
-          routerContext.getFileSystem().listStatus(new Path("/"));
+      FileStatus[] list = routerFs.listStatus(new Path("/"));
       assertEquals("Agroup", list[0].getGroup());
       assertEquals("Aowner", list[0].getOwner());
       assertEquals((short) 775, list[0].getPermission().toShort());
@@ -347,8 +349,7 @@ public class TestRouterMountTable {
       nnFs1.setOwner(new Path("/tmp/testdir01"), "Aowner01", "Agroup01");
       nnFs1.setPermission(new Path("/tmp/testdir01"),
           FsPermission.createImmutable((short) 755));
-      FileStatus[] list =
-          routerContext.getFileSystem().listStatus(new Path("/"));
+      FileStatus[] list = routerFs.listStatus(new Path("/"));
       assertTrue("Agroup".equals(list[0].getGroup())
           || "Agroup01".equals(list[0].getGroup()));
       assertTrue("Aowner".equals(list[0].getOwner())
@@ -374,8 +375,7 @@ public class TestRouterMountTable {
     addEntry.setOwnerName("owner1");
     assertTrue(addMountTable(addEntry));
     HdfsFileStatus finfo = routerProtocol.getFileInfo("/testdir");
-    FileStatus[] finfo1 =
-        routerContext.getFileSystem().listStatus(new Path("/"));
+    FileStatus[] finfo1 = routerFs.listStatus(new Path("/"));
     assertEquals("owner1", finfo.getOwner());
     assertEquals("owner1", finfo1[0].getOwner());
     assertEquals("group1", finfo.getGroup());
@@ -395,8 +395,7 @@ public class TestRouterMountTable {
       nnFs0.mkdirs(new Path("/tmp/testdir"));
       nnFs0.mkdirs(new Path("/tmp/testdir/1"));
       nnFs0.mkdirs(new Path("/tmp/testdir/2"));
-      FileStatus[] finfo1 =
-          routerContext.getFileSystem().listStatus(new Path("/"));
+      FileStatus[] finfo1 = routerFs.listStatus(new Path("/"));
       assertEquals(2, ((HdfsFileStatus) finfo1[0]).getChildrenNum());
     } finally {
       nnFs0.delete(new Path("/tmp"), true);
@@ -421,12 +420,26 @@ public class TestRouterMountTable {
       nnFs1.mkdirs(new Path("/tmp/testdir01"));
       nnFs0.mkdirs(new Path("/tmp/testdir/1"));
       nnFs1.mkdirs(new Path("/tmp/testdir01/1"));
-      FileStatus[] finfo1 =
-          routerContext.getFileSystem().listStatus(new Path("/"));
+      FileStatus[] finfo1 = routerFs.listStatus(new Path("/"));
       assertEquals(2, ((HdfsFileStatus) finfo1[0]).getChildrenNum());
     } finally {
       nnFs0.delete(new Path("/tmp"), true);
      nnFs1.delete(new Path("/tmp"), true);
     }
   }
+
+  /**
+   * Validates the path in the exception. The path should be relative to the
+   * mount point, not to the sub-cluster.
+   */
+  @Test
+  public void testPathInException() throws Exception {
+    MountTable addEntry = MountTable.newInstance("/mount",
+        Collections.singletonMap("ns0", "/tmp/testdir"));
+    addEntry.setDestOrder(DestinationOrder.HASH_ALL);
+    assertTrue(addMountTable(addEntry));
+    LambdaTestUtils.intercept(FileNotFoundException.class,
+        "Directory/File does not exist /mount/file",
+        () -> routerFs.setOwner(new Path("/mount/file"), "user", "group"));
+  }
 }
\ No newline at end of file
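
LambdaTestUtils.intercept, used above, runs the given callable, fails the
test if nothing is thrown, and asserts both the exception class and that
the message contains the given substring; it also returns the caught
exception for further assertions. A small usage sketch (the path below is
made up for illustration):

    FileNotFoundException fnfe = LambdaTestUtils.intercept(
        FileNotFoundException.class,
        "does not exist",
        () -> routerFs.getFileStatus(new Path("/mount/no/such/file")));
    assertTrue(fnfe.getMessage().contains("/mount/no/such/file"));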




[hadoop] 17/41: HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ce9351ab83fc22db184850da57f219e646f1b0a9
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Mon Dec 17 12:35:07 2018 +0800

    HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.
---
 .../federation/metrics/NamenodeBeanMetrics.java    | 149 ++++++++++++++++++---
 .../hdfs/server/federation/router/Router.java      |   8 +-
 .../hdfs/server/federation/router/TestRouter.java  |  14 ++
 3 files changed, 147 insertions(+), 24 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 64df10c..25ec27c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -168,8 +168,12 @@ public class NamenodeBeanMetrics
     }
   }
 
-  private FederationMetrics getFederationMetrics() {
-    return this.router.getMetrics();
+  private FederationMetrics getFederationMetrics() throws IOException {
+    FederationMetrics metrics = getRouter().getMetrics();
+    if (metrics == null) {
+      throw new IOException("Federated metrics is not initialized");
+    }
+    return metrics;
   }
 
   /////////////////////////////////////////////////////////
@@ -188,22 +192,42 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getUsed() {
-    return getFederationMetrics().getUsedCapacity();
+    try {
+      return getFederationMetrics().getUsedCapacity();
+    } catch (IOException e) {
+      LOG.debug("Failed to get the used capacity", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getFree() {
-    return getFederationMetrics().getRemainingCapacity();
+    try {
+      return getFederationMetrics().getRemainingCapacity();
+    } catch (IOException e) {
+      LOG.debug("Failed to get remaining capacity", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getTotal() {
-    return getFederationMetrics().getTotalCapacity();
+    try {
+      return getFederationMetrics().getTotalCapacity();
+    } catch (IOException e) {
+      LOG.debug("Failed to Get total capacity", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getProvidedCapacity() {
-    return getFederationMetrics().getProvidedSpace();
+    try {
+      return getFederationMetrics().getProvidedSpace();
+    } catch (IOException e) {
+      LOG.debug("Failed to get provided capacity", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -261,39 +285,79 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getTotalBlocks() {
-    return getFederationMetrics().getNumBlocks();
+    try {
+      return getFederationMetrics().getNumBlocks();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getNumberOfMissingBlocks() {
-    return getFederationMetrics().getNumOfMissingBlocks();
+    try {
+      return getFederationMetrics().getNumOfMissingBlocks();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of missing blocks", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   @Deprecated
   public long getPendingReplicationBlocks() {
-    return getFederationMetrics().getNumOfBlocksPendingReplication();
+    try {
+      return getFederationMetrics().getNumOfBlocksPendingReplication();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks pending replica",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getPendingReconstructionBlocks() {
-    return getFederationMetrics().getNumOfBlocksPendingReplication();
+    try {
+      return getFederationMetrics().getNumOfBlocksPendingReplication();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks pending replica",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   @Deprecated
   public long getUnderReplicatedBlocks() {
-    return getFederationMetrics().getNumOfBlocksUnderReplicated();
+    try {
+      return getFederationMetrics().getNumOfBlocksUnderReplicated();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks under replicated",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getLowRedundancyBlocks() {
-    return getFederationMetrics().getNumOfBlocksUnderReplicated();
+    try {
+      return getFederationMetrics().getNumOfBlocksUnderReplicated();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks under replicated",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public long getPendingDeletionBlocks() {
-    return getFederationMetrics().getNumOfBlocksPendingDeletion();
+    try {
+      return getFederationMetrics().getNumOfBlocksPendingDeletion();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of blocks pending deletion",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -466,7 +530,12 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getNNStartedTimeInMillis() {
-    return this.router.getStartTime();
+    try {
+      return getRouter().getStartTime();
+    } catch (IOException e) {
+      LOG.debug("Failed to get the router startup time", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -522,7 +591,12 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getFilesTotal() {
-    return getFederationMetrics().getNumFiles();
+    try {
+      return getFederationMetrics().getNumFiles();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of files", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -532,12 +606,22 @@ public class NamenodeBeanMetrics
 
   @Override
   public int getNumLiveDataNodes() {
-    return this.router.getMetrics().getNumLiveNodes();
+    try {
+      return getFederationMetrics().getNumLiveNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of live nodes", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public int getNumDeadDataNodes() {
-    return this.router.getMetrics().getNumDeadNodes();
+    try {
+      return getFederationMetrics().getNumDeadNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of dead nodes", e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -547,17 +631,35 @@ public class NamenodeBeanMetrics
 
   @Override
   public int getNumDecomLiveDataNodes() {
-    return this.router.getMetrics().getNumDecomLiveNodes();
+    try {
+      return getFederationMetrics().getNumDecomLiveNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get the number of live decommissioned datanodes",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public int getNumDecomDeadDataNodes() {
-    return this.router.getMetrics().getNumDecomDeadNodes();
+    try {
+      return getFederationMetrics().getNumDecomDeadNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get the number of dead decommissioned datanodes",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
   public int getNumDecommissioningDataNodes() {
-    return this.router.getMetrics().getNumDecommissioningNodes();
+    try {
+      return getFederationMetrics().getNumDecommissioningNodes();
+    } catch (IOException e) {
+      LOG.debug("Failed to get number of decommissioning nodes",
+          e.getMessage());
+    }
+    return 0;
   }
 
   @Override
@@ -697,4 +799,11 @@ public class NamenodeBeanMetrics
   public String getVerifyECWithTopologyResult() {
     return null;
   }
+
+  private Router getRouter() throws IOException {
+    if (this.router == null) {
+      throw new IOException("Router is not initialized");
+    }
+    return this.router;
+  }
 }
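
Every metric getter in this patch follows the same shape: call the guarded
getter, log the failure at debug level, and fall back to 0 so that JMX
pollers hitting the bean before the Router finishes initializing no longer
trigger an NPE. The repeated try/catch could be factored into a helper; a
hypothetical sketch, not part of the patch:

    import java.io.IOException;

    final class SafeMetrics {
      /** Supplier of a long metric that may fail before initialization. */
      @FunctionalInterface
      interface IOLongSupplier {
        long get() throws IOException;
      }

      /** Return the metric value, or 0 if the source is not ready yet. */
      static long orZero(IOLongSupplier supplier) {
        try {
          return supplier.get();
        } catch (IOException e) {
          // Degrade gracefully instead of surfacing an NPE to JMX clients.
          return 0;
        }
      }
    }

    // Usage: return SafeMetrics.orZero(
    //     () -> getFederationMetrics().getUsedCapacity());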
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 3288273..3182e27 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -586,11 +586,11 @@ public class Router extends CompositeService {
    *
    * @return Namenode metrics.
    */
-  public NamenodeBeanMetrics getNamenodeMetrics() {
-    if (this.metrics != null) {
-      return this.metrics.getNamenodeMetrics();
+  public NamenodeBeanMetrics getNamenodeMetrics() throws IOException {
+    if (this.metrics == null) {
+      throw new IOException("Namenode metrics is not initialized");
     }
-    return null;
+    return this.metrics.getNamenodeMetrics();
   }
 
   /**
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouter.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouter.java
index db4be29..f83cfda 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouter.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouter.java
@@ -203,4 +203,18 @@ public class TestRouter {
     router.stop();
     router.close();
   }
+
+  @Test
+  public void testRouterMetricsWhenDisabled() throws Exception {
+
+    Router router = new Router();
+    router.init(new RouterConfigBuilder(conf).rpc().build());
+    router.start();
+
+    intercept(IOException.class, "Namenode metrics is not initialized",
+        () -> router.getNamenodeMetrics().getCacheCapacity());
+
+    router.stop();
+    router.close();
+  }
 }




[hadoop] 39/41: HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1645df92999a2940c19cde4cdddfb12c93cf9e84
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Fri Feb 15 09:25:09 2019 -0800

    HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena.
---
 .../server/federation/router/ErasureCoding.java    |  12 +-
 .../federation/router/RouterClientProtocol.java    |  55 ++-
 .../server/federation/router/RouterRpcServer.java  |  46 ++-
 .../federation/router/RouterStoragePolicy.java     |  12 +-
 ...erRPCMultipleDestinationMountTableResolver.java | 394 +++++++++++++++++++++
 5 files changed, 482 insertions(+), 37 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
index 480b232..f4584b1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
@@ -157,7 +157,11 @@ public class ErasureCoding {
     RemoteMethod remoteMethod = new RemoteMethod("setErasureCodingPolicy",
         new Class<?>[] {String.class, String.class},
         new RemoteParam(), ecPolicyName);
-    rpcClient.invokeSequential(locations, remoteMethod, null, null);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, remoteMethod);
+    } else {
+      rpcClient.invokeSequential(locations, remoteMethod);
+    }
   }
 
   public void unsetErasureCodingPolicy(String src) throws IOException {
@@ -167,7 +171,11 @@ public class ErasureCoding {
         rpcServer.getLocationsForPath(src, true);
     RemoteMethod remoteMethod = new RemoteMethod("unsetErasureCodingPolicy",
         new Class<?>[] {String.class}, new RemoteParam());
-    rpcClient.invokeSequential(locations, remoteMethod, null, null);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, remoteMethod);
+    } else {
+      rpcClient.invokeSequential(locations, remoteMethod);
+    }
   }
 
   public ECBlockGroupStats getECBlockGroupStats() throws IOException {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 5383a7d..6cc12ca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -213,7 +213,7 @@ public class RouterClientProtocol implements ClientProtocol {
       throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
 
-    if (createParent && isPathAll(src)) {
+    if (createParent && rpcServer.isPathAll(src)) {
       int index = src.lastIndexOf(Path.SEPARATOR);
       String parent = src.substring(0, index);
       LOG.debug("Creating {} requires creating parent {}", src, parent);
@@ -273,9 +273,13 @@ public class RouterClientProtocol implements ClientProtocol {
     RemoteMethod method = new RemoteMethod("setReplication",
         new Class<?>[] {String.class, short.class}, new RemoteParam(),
         replication);
-    Object result = rpcClient.invokeSequential(
-        locations, method, Boolean.class, Boolean.TRUE);
-    return (boolean) result;
+    if (rpcServer.isInvokeConcurrent(src)) {
+      return !rpcClient.invokeConcurrent(locations, method, Boolean.class)
+          .containsValue(false);
+    } else {
+      return rpcClient.invokeSequential(locations, method, Boolean.class,
+          Boolean.TRUE);
+    }
   }
 
   @Override
@@ -299,7 +303,7 @@ public class RouterClientProtocol implements ClientProtocol {
     RemoteMethod method = new RemoteMethod("setPermission",
         new Class<?>[] {String.class, FsPermission.class},
         new RemoteParam(), permissions);
-    if (isPathAll(src)) {
+    if (rpcServer.isInvokeConcurrent(src)) {
       rpcClient.invokeConcurrent(locations, method);
     } else {
       rpcClient.invokeSequential(locations, method);
@@ -316,7 +320,7 @@ public class RouterClientProtocol implements ClientProtocol {
     RemoteMethod method = new RemoteMethod("setOwner",
         new Class<?>[] {String.class, String.class, String.class},
         new RemoteParam(), username, groupname);
-    if (isPathAll(src)) {
+    if (rpcServer.isInvokeConcurrent(src)) {
       rpcClient.invokeConcurrent(locations, method);
     } else {
       rpcClient.invokeSequential(locations, method);
@@ -549,7 +553,7 @@ public class RouterClientProtocol implements ClientProtocol {
     RemoteMethod method = new RemoteMethod("delete",
         new Class<?>[] {String.class, boolean.class}, new RemoteParam(),
         recursive);
-    if (isPathAll(src)) {
+    if (rpcServer.isPathAll(src)) {
       return rpcClient.invokeAll(locations, method);
     } else {
       return rpcClient.invokeSequential(locations, method,
@@ -569,7 +573,7 @@ public class RouterClientProtocol implements ClientProtocol {
         new RemoteParam(), masked, createParent);
 
     // Create in all locations
-    if (isPathAll(src)) {
+    if (rpcServer.isPathAll(src)) {
       return rpcClient.invokeAll(locations, method);
     }
 
@@ -707,7 +711,7 @@ public class RouterClientProtocol implements ClientProtocol {
 
     HdfsFileStatus ret = null;
     // If it's a directory, we check in all locations
-    if (isPathAll(src)) {
+    if (rpcServer.isPathAll(src)) {
       ret = getFileInfoAll(locations, method);
     } else {
       // Check for file information sequentially
@@ -1309,7 +1313,11 @@ public class RouterClientProtocol implements ClientProtocol {
     RemoteMethod method = new RemoteMethod("setXAttr",
         new Class<?>[] {String.class, XAttr.class, EnumSet.class},
         new RemoteParam(), xAttr, flag);
-    rpcClient.invokeSequential(locations, method);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, method);
+    } else {
+      rpcClient.invokeSequential(locations, method);
+    }
   }
 
   @SuppressWarnings("unchecked")
@@ -1350,7 +1358,11 @@ public class RouterClientProtocol implements ClientProtocol {
         rpcServer.getLocationsForPath(src, true);
     RemoteMethod method = new RemoteMethod("removeXAttr",
         new Class<?>[] {String.class, XAttr.class}, new RemoteParam(), xAttr);
-    rpcClient.invokeSequential(locations, method);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, method);
+    } else {
+      rpcClient.invokeSequential(locations, method);
+    }
   }
 
   @Override
@@ -1713,27 +1725,6 @@ public class RouterClientProtocol implements ClientProtocol {
   }
 
   /**
-   * Check if a path should be in all subclusters.
-   *
-   * @param path Path to check.
-   * @return If a path should be in all subclusters.
-   */
-  private boolean isPathAll(final String path) {
-    if (subclusterResolver instanceof MountTableResolver) {
-      try {
-        MountTableResolver mountTable = (MountTableResolver)subclusterResolver;
-        MountTable entry = mountTable.getMountPoint(path);
-        if (entry != null) {
-          return entry.isAll();
-        }
-      } catch (IOException e) {
-        LOG.error("Cannot get mount point", e);
-      }
-    }
-    return false;
-  }
-
-  /**
    * Create a new file status for a mount point.
    *
    * @param name Name of the mount point.
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index a312d4b..e4ea58b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -1541,4 +1541,48 @@ public class RouterRpcServer extends AbstractService
   public FederationRPCMetrics getRPCMetrics() {
     return this.rpcMonitor.getRPCMetrics();
   }
-}
+
+  /**
+   * Check if a path should be in all subclusters.
+   *
+   * @param path Path to check.
+   * @return True if the path should be present in all subclusters.
+   */
+  boolean isPathAll(final String path) {
+    if (subclusterResolver instanceof MountTableResolver) {
+      try {
+        MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
+        MountTable entry = mountTable.getMountPoint(path);
+        if (entry != null) {
+          return entry.isAll();
+        }
+      } catch (IOException e) {
+        LOG.error("Cannot get mount point", e);
+      }
+    }
+    return false;
+  }
+
+  /**
+   * Check if a call needs to be invoked in all the locations. The call
+   * should be invoked in all locations if the destination order of the
+   * mount entry is HASH_ALL, RANDOM or SPACE, or if the source is itself
+   * a mount entry.
+   * @param path The path on which the operation needs to be invoked.
+   * @return true if the call should be invoked in all locations.
+   * @throws IOException
+   */
+  boolean isInvokeConcurrent(final String path) throws IOException {
+    if (subclusterResolver instanceof MountTableResolver) {
+      MountTableResolver mountTableResolver =
+          (MountTableResolver) subclusterResolver;
+      List<String> mountPoints = mountTableResolver.getMountPoints(path);
+      // If this is a mount point, we need to invoke everywhere.
+      if (mountPoints != null) {
+        return true;
+      }
+      return isPathAll(path);
+    }
+    return false;
+  }
+}
\ No newline at end of file
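
isInvokeConcurrent above keys off whether the mount entry spans all
subclusters, which in the RBF codebase is MountTable#isAll(). A
stand-alone restatement of which DestinationOrder values imply it, for
clarity (an illustration, not the actual method):

    import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;

    final class OrderCheck {
      static boolean impliesAllSubclusters(DestinationOrder order) {
        switch (order) {
        case HASH_ALL:
        case RANDOM:
        case SPACE:
          // These orders spread a mount entry across every destination,
          // so directories may exist in all subclusters.
          return true;
        case HASH:
        case LOCAL:
        default:
          // The path resolves to a single destination.
          return false;
        }
      }
    }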
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
index 8a55b9a..a4538b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
@@ -50,7 +50,11 @@ public class RouterStoragePolicy {
         new Class<?>[] {String.class, String.class},
         new RemoteParam(),
         policyName);
-    rpcClient.invokeSequential(locations, method, null, null);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, method);
+    } else {
+      rpcClient.invokeSequential(locations, method);
+    }
   }
 
   public BlockStoragePolicy[] getStoragePolicies() throws IOException {
@@ -67,7 +71,11 @@ public class RouterStoragePolicy {
     RemoteMethod method = new RemoteMethod("unsetStoragePolicy",
         new Class<?>[] {String.class},
         new RemoteParam());
-    rpcClient.invokeSequential(locations, method);
+    if (rpcServer.isInvokeConcurrent(src)) {
+      rpcClient.invokeConcurrent(locations, method);
+    } else {
+      rpcClient.invokeSequential(locations, method);
+    }
   }
 
   public BlockStoragePolicy getStoragePolicy(String path)
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
new file mode 100644
index 0000000..8c15151
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
@@ -0,0 +1,394 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests router rpc with multiple destination mount table resolver.
+ */
+public class TestRouterRPCMultipleDestinationMountTableResolver {
+  private static StateStoreDFSCluster cluster;
+  private static RouterContext routerContext;
+  private static MountTableResolver resolver;
+  private static DistributedFileSystem nnFs0;
+  private static DistributedFileSystem nnFs1;
+  private static DistributedFileSystem routerFs;
+  private static RouterRpcServer rpcServer;
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+
+    // Build and start a federated cluster
+    cluster = new StateStoreDFSCluster(false, 2,
+        MultipleDestinationMountTableResolver.class);
+    Configuration routerConf =
+        new RouterConfigBuilder().stateStore().admin().quota().rpc().build();
+
+    Configuration hdfsConf = new Configuration(false);
+
+    cluster.addRouterOverrides(routerConf);
+    cluster.addNamenodeOverrides(hdfsConf);
+    cluster.startCluster();
+    cluster.startRouters();
+    cluster.waitClusterUp();
+
+    routerContext = cluster.getRandomRouter();
+    resolver =
+        (MountTableResolver) routerContext.getRouter().getSubclusterResolver();
+    nnFs0 = (DistributedFileSystem) cluster
+        .getNamenode(cluster.getNameservices().get(0), null).getFileSystem();
+    nnFs1 = (DistributedFileSystem) cluster
+        .getNamenode(cluster.getNameservices().get(1), null).getFileSystem();
+    routerFs = (DistributedFileSystem) routerContext.getFileSystem();
+    rpcServer = routerContext.getRouter().getRpcServer();
+  }
+
+  @AfterClass
+  public static void tearDown() {
+    if (cluster != null) {
+      cluster.stopRouter(routerContext);
+      cluster.shutdown();
+      cluster = null;
+    }
+  }
+
+  /**
+   * Set up the mount entry, directories and files to verify invocations.
+   * @param order The order that the mount entry needs to follow.
+   * @throws Exception If the test environment cannot be set up.
+   */
+  public void setupOrderMountPath(DestinationOrder order) throws Exception {
+    Map<String, String> destMap = new HashMap<>();
+    destMap.put("ns0", "/tmp");
+    destMap.put("ns1", "/tmp");
+    nnFs0.mkdirs(new Path("/tmp"));
+    nnFs1.mkdirs(new Path("/tmp"));
+    MountTable addEntry = MountTable.newInstance("/mount", destMap);
+    addEntry.setDestOrder(order);
+    assertTrue(addMountTable(addEntry));
+    routerFs.mkdirs(new Path("/mount/dir/dir"));
+    DFSTestUtil.createFile(routerFs, new Path("/mount/dir/file"), 100L, (short) 1,
+        1024L);
+    DFSTestUtil.createFile(routerFs, new Path("/mount/file"), 100L, (short) 1,
+        1024L);
+  }
+
+  @After
+  public void resetTestEnvironment() throws IOException {
+    RouterClient client = routerContext.getAdminClient();
+    MountTableManager mountTableManager = client.getMountTableManager();
+    RemoveMountTableEntryRequest req2 =
+        RemoveMountTableEntryRequest.newInstance("/mount");
+    mountTableManager.removeMountTableEntry(req2);
+    nnFs0.delete(new Path("/tmp"), true);
+    nnFs1.delete(new Path("/tmp"), true);
+  }
+
+  @Test
+  public void testInvocationSpaceOrder() throws Exception {
+    setupOrderMountPath(DestinationOrder.SPACE);
+    boolean isDirAll = rpcServer.isPathAll("/mount/dir");
+    assertTrue(isDirAll);
+    testInvocation(isDirAll);
+  }
+
+  @Test
+  public void testInvocationHashAllOrder() throws Exception {
+    setupOrderMountPath(DestinationOrder.HASH_ALL);
+    boolean isDirAll = rpcServer.isPathAll("/mount/dir");
+    assertTrue(isDirAll);
+    testInvocation(isDirAll);
+  }
+
+  @Test
+  public void testInvocationRandomOrder() throws Exception {
+    setupOrderMountPath(DestinationOrder.RANDOM);
+    boolean isDirAll = rpcServer.isPathAll("/mount/dir");
+    assertTrue(isDirAll);
+    testInvocation(isDirAll);
+  }
+
+  @Test
+  public void testInvocationHashOrder() throws Exception {
+    setupOrderMountPath(DestinationOrder.HASH);
+    boolean isDirAll = rpcServer.isPathAll("/mount/dir");
+    assertFalse(isDirAll);
+    testInvocation(isDirAll);
+  }
+
+  @Test
+  public void testInvocationLocalOrder() throws Exception {
+    setupOrderMountPath(DestinationOrder.LOCAL);
+    boolean isDirAll = rpcServer.isPathAll("/mount/dir");
+    assertFalse(isDirAll);
+    testInvocation(isDirAll);
+  }
+
+  /**
+   * Verifies the invocation of APIs at the directory, file and mount levels.
+   * @param dirAll true if the mount entry creates the directory in all
+   *          locations.
+   * @throws IOException
+   */
+  private void testInvocation(boolean dirAll) throws IOException {
+    // Verify invocation on nested directory and file.
+    Path mountDir = new Path("/mount/dir/dir");
+    Path nameSpaceFile = new Path("/tmp/dir/file");
+    Path mountFile = new Path("/mount/dir/file");
+    Path mountEntry = new Path("/mount");
+    Path mountDest = new Path("/tmp");
+    Path nameSpaceDir = new Path("/tmp/dir/dir");
+    final String name = "user.a1";
+    final byte[] value = {0x31, 0x32, 0x33};
+    testDirectoryAndFileLevelInvocation(dirAll, mountDir, nameSpaceFile,
+        mountFile, nameSpaceDir, name, value);
+
+    // Verify invocation on non nested directory and file.
+    mountDir = new Path("/mount/dir");
+    nameSpaceFile = new Path("/tmp/file");
+    mountFile = new Path("/mount/file");
+    nameSpaceDir = new Path("/tmp/dir");
+    testDirectoryAndFileLevelInvocation(dirAll, mountDir, nameSpaceFile,
+        mountFile, nameSpaceDir, name, value);
+
+    // Check invocation directly for a mount point.
+    // Verify owner and permissions.
+    routerFs.setOwner(mountEntry, "testuser", "testgroup");
+    routerFs.setPermission(mountEntry,
+        FsPermission.createImmutable((short) 777));
+    assertEquals("testuser", routerFs.getFileStatus(mountEntry).getOwner());
+    assertEquals("testuser", nnFs0.getFileStatus(mountDest).getOwner());
+    assertEquals("testuser", nnFs1.getFileStatus(mountDest).getOwner());
+    assertEquals((short) 777,
+        routerFs.getFileStatus(mountEntry).getPermission().toShort());
+    assertEquals((short) 777,
+        nnFs0.getFileStatus(mountDest).getPermission().toShort());
+    assertEquals((short) 777,
+        nnFs1.getFileStatus(mountDest).getPermission().toShort());
+
+    // Verify storage policy.
+    routerFs.setStoragePolicy(mountEntry, "COLD");
+    assertEquals("COLD", routerFs.getStoragePolicy(mountEntry).getName());
+    assertEquals("COLD", nnFs0.getStoragePolicy(mountDest).getName());
+    assertEquals("COLD", nnFs1.getStoragePolicy(mountDest).getName());
+    routerFs.unsetStoragePolicy(mountEntry);
+    assertEquals("HOT", routerFs.getStoragePolicy(mountDest).getName());
+    assertEquals("HOT", nnFs0.getStoragePolicy(mountDest).getName());
+    assertEquals("HOT", nnFs1.getStoragePolicy(mountDest).getName());
+
+    // Verify erasure coding policy.
+    routerFs.setErasureCodingPolicy(mountEntry, "RS-6-3-1024k");
+    assertEquals("RS-6-3-1024k",
+        routerFs.getErasureCodingPolicy(mountEntry).getName());
+    assertEquals("RS-6-3-1024k",
+        nnFs0.getErasureCodingPolicy(mountDest).getName());
+    assertEquals("RS-6-3-1024k",
+        nnFs1.getErasureCodingPolicy(mountDest).getName());
+    routerFs.unsetErasureCodingPolicy(mountEntry);
+    assertNull(routerFs.getErasureCodingPolicy(mountDest));
+    assertNull(nnFs0.getErasureCodingPolicy(mountDest));
+    assertNull(nnFs1.getErasureCodingPolicy(mountDest));
+
+    // Verify xAttr.
+    routerFs.setXAttr(mountEntry, name, value);
+    assertArrayEquals(value, routerFs.getXAttr(mountEntry, name));
+    assertArrayEquals(value, nnFs0.getXAttr(mountDest, name));
+    assertArrayEquals(value, nnFs1.getXAttr(mountDest, name));
+    routerFs.removeXAttr(mountEntry, name);
+    assertEquals(0, routerFs.getXAttrs(mountEntry).size());
+    assertEquals(0, nnFs0.getXAttrs(mountDest).size());
+    assertEquals(0, nnFs1.getXAttrs(mountDest).size());
+  }
+
+  /**
+   * Sets up and verifies invocations on directories and files.
+   */
+  private void testDirectoryAndFileLevelInvocation(boolean dirAll,
+      Path mountDir, Path nameSpaceFile, Path mountFile, Path nameSpaceDir,
+      final String name, final byte[] value) throws IOException {
+    // Check invocation for a directory.
+    routerFs.setOwner(mountDir, "testuser", "testgroup");
+    routerFs.setPermission(mountDir, FsPermission.createImmutable((short) 777));
+    routerFs.setStoragePolicy(mountDir, "COLD");
+    routerFs.setErasureCodingPolicy(mountDir, "RS-6-3-1024k");
+    routerFs.setXAttr(mountDir, name, value);
+
+    // Verify the directory-level invocations. For mounts that do not create
+    // directories in all subclusters, at least one subcluster must have them.
+    boolean checkedDir1 = verifyDirectoryLevelInvocations(dirAll, nameSpaceDir,
+        nnFs0, name, value);
+    boolean checkedDir2 = verifyDirectoryLevelInvocations(dirAll, nameSpaceDir,
+        nnFs1, name, value);
+    assertTrue("The file didn't existed in either of the subclusters.",
+        checkedDir1 || checkedDir2);
+    routerFs.unsetStoragePolicy(mountDir);
+    routerFs.removeXAttr(mountDir, name);
+    routerFs.unsetErasureCodingPolicy(mountDir);
+
+    checkedDir1 =
+        verifyDirectoryLevelUnsetInvocations(dirAll, nnFs0, nameSpaceDir);
+    checkedDir2 =
+        verifyDirectoryLevelUnsetInvocations(dirAll, nnFs1, nameSpaceDir);
+    assertTrue("The file didn't existed in either of the subclusters.",
+        checkedDir1 || checkedDir2);
+
+    // Check invocation for a file.
+    routerFs.setOwner(mountFile, "testuser", "testgroup");
+    routerFs.setPermission(mountFile,
+        FsPermission.createImmutable((short) 777));
+    routerFs.setStoragePolicy(mountFile, "COLD");
+    routerFs.setReplication(mountFile, (short) 2);
+    routerFs.setXAttr(mountFile, name, value);
+    verifyFileLevelInvocations(nameSpaceFile, nnFs0, mountFile, name, value);
+    verifyFileLevelInvocations(nameSpaceFile, nnFs1, mountFile, name, value);
+  }
+
+  /**
+   * Verify invocations of APIs unsetting values at the directory level.
+   * @param dirAll true if the mount entry order creates the directory in all
+   *          locations.
+   * @param nnFs file system where the directory-level invocation needs to be
+   *          tested.
+   * @param nameSpaceDir path of the directory in the namespace.
+   * @return true if the directory existed and the unset invocations were
+   *         verified.
+   * @throws IOException
+   */
+  private boolean verifyDirectoryLevelUnsetInvocations(boolean dirAll,
+      DistributedFileSystem nnFs, Path nameSpaceDir) throws IOException {
+    boolean checked = false;
+    if (dirAll || nnFs.exists(nameSpaceDir)) {
+      checked = true;
+      assertEquals("HOT", nnFs.getStoragePolicy(nameSpaceDir).getName());
+      assertNull(nnFs.getErasureCodingPolicy(nameSpaceDir));
+      assertEquals(0, nnFs.getXAttrs(nameSpaceDir).size());
+    }
+    return checked;
+  }
+
+  /**
+   * Verify file level invocations.
+   * @param nameSpaceFile path of the file in the namespace.
+   * @param nnFs the file system where the file invocation needs to be
+   *          checked.
+   * @param mountFile path of the file w.r.t. the mount table.
+   * @param name name of the XAttr.
+   * @param value value of the XAttr.
+   * @throws IOException
+   */
+  private void verifyFileLevelInvocations(Path nameSpaceFile,
+      DistributedFileSystem nnFs, Path mountFile, final String name,
+      final byte[] value) throws IOException {
+    if (nnFs.exists(nameSpaceFile)) {
+      assertEquals("testuser", nnFs.getFileStatus(nameSpaceFile).getOwner());
+      assertEquals((short) 777,
+          nnFs.getFileStatus(nameSpaceFile).getPermission().toShort());
+      assertEquals("COLD", nnFs.getStoragePolicy(nameSpaceFile).getName());
+      assertEquals((short) 2,
+          nnFs.getFileStatus(nameSpaceFile).getReplication());
+      assertArrayEquals(value, nnFs.getXAttr(nameSpaceFile, name));
+
+      routerFs.unsetStoragePolicy(mountFile);
+      routerFs.removeXAttr(mountFile, name);
+      assertEquals(0, nnFs.getXAttrs(nameSpaceFile).size());
+
+      assertEquals("HOT", nnFs.getStoragePolicy(nameSpaceFile).getName());
+    }
+  }
+
+  /**
+   * Verify invocations at the directory level.
+   * @param dirAll true if the mount entry order creates the directory in all
+   *          locations.
+   * @param nameSpaceDir path of the directory in the namespace.
+   * @param nnFs file system where the directory-level invocation needs to be
+   *          tested.
+   * @param name name for the XAttr.
+   * @param value value for the XAttr.
+   * @return true if the directory existed and the invocations were verified.
+   * @throws IOException
+   */
+  private boolean verifyDirectoryLevelInvocations(boolean dirAll,
+      Path nameSpaceDir, DistributedFileSystem nnFs, final String name,
+      final byte[] value) throws IOException {
+    boolean checked = false;
+    if (dirAll || nnFs.exists(nameSpaceDir)) {
+      checked = true;
+      assertEquals("testuser", nnFs.getFileStatus(nameSpaceDir).getOwner());
+      assertEquals("COLD", nnFs.getStoragePolicy(nameSpaceDir).getName());
+      assertEquals("RS-6-3-1024k",
+          nnFs.getErasureCodingPolicy(nameSpaceDir).getName());
+      assertArrayEquals(value, nnFs.getXAttr(nameSpaceDir, name));
+      assertEquals((short) 777,
+          nnFs.getFileStatus(nameSpaceDir).getPermission().toShort());
+    }
+    return checked;
+  }
+
+  /**
+   * Add a mount table entry to the mount table through the admin API.
+   * @param entry Mount table entry to add.
+   * @return If it was successfully added.
+   * @throws IOException Problems adding entries.
+   */
+  private boolean addMountTable(final MountTable entry) throws IOException {
+    RouterClient client = routerContext.getAdminClient();
+    MountTableManager mountTableManager = client.getMountTableManager();
+    AddMountTableEntryRequest addRequest =
+        AddMountTableEntryRequest.newInstance(entry);
+    AddMountTableEntryResponse addResponse =
+        mountTableManager.addMountTableEntry(addRequest);
+
+    // Reload the Router cache
+    resolver.loadCache(true);
+
+    return addResponse.getStatus();
+  }
+}
\ No newline at end of file




[hadoop] 22/41: HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c74f7e12ac1a77e07920e89c4ec8fef93a42885d
Author: Yiqun Lin <yq...@apache.org>
AuthorDate: Wed Jan 9 17:18:43 2019 +0800

    HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.
---
 .../federation/router/RouterAdminServer.java       | 23 +++++++----
 .../src/main/resources/hdfs-rbf-default.xml        |  4 +-
 .../src/site/markdown/HDFSRouterFederation.md      |  4 +-
 .../server/federation/router/TestRouterQuota.java  | 48 +++++++++++++++++++++-
 4 files changed, 67 insertions(+), 12 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 5bb7751..18c19e0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -250,23 +250,25 @@ public class RouterAdminServer extends AbstractService
 
     MountTable mountTable = request.getEntry();
     if (mountTable != null && router.isQuotaEnabled()) {
-      synchronizeQuota(mountTable);
+      synchronizeQuota(mountTable.getSourcePath(),
+          mountTable.getQuota().getQuota(),
+          mountTable.getQuota().getSpaceQuota());
     }
     return response;
   }
 
   /**
    * Synchronize the quota value across mount table and subclusters.
-   * @param mountTable Quota set in given mount table.
+   * @param path Source path in given mount table.
+   * @param nsQuota Name quota definition in given mount table.
+   * @param ssQuota Space quota definition in given mount table.
    * @throws IOException
    */
-  private void synchronizeQuota(MountTable mountTable) throws IOException {
-    String path = mountTable.getSourcePath();
-    long nsQuota = mountTable.getQuota().getQuota();
-    long ssQuota = mountTable.getQuota().getSpaceQuota();
-
-    if (nsQuota != HdfsConstants.QUOTA_DONT_SET
-        || ssQuota != HdfsConstants.QUOTA_DONT_SET) {
+  private void synchronizeQuota(String path, long nsQuota, long ssQuota)
+      throws IOException {
+    if (router.isQuotaEnabled() &&
+        (nsQuota != HdfsConstants.QUOTA_DONT_SET
+        || ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
       HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
       if (ret != null) {
         this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
@@ -278,6 +280,9 @@ public class RouterAdminServer extends AbstractService
   @Override
   public RemoveMountTableEntryResponse removeMountTableEntry(
       RemoveMountTableEntryRequest request) throws IOException {
+    // Clear the sub-cluster's quota definition.
+    synchronizeQuota(request.getSrcPath(), HdfsConstants.QUOTA_RESET,
+        HdfsConstants.QUOTA_RESET);
     return getMountTableStore().removeMountTableEntry(request);
   }
 
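
The new call in removeMountTableEntry clears the quota that was pushed
down to the subclusters, so a removed mount point does not leave stale
limits behind. A minimal sketch of what that reset amounts to at the
NameNode level, assuming a ClientProtocol handle to the subcluster (an
illustration, not the actual RouterAdminServer code path):

    // HdfsConstants.QUOTA_RESET clears both the name quota and the space
    // quota on the destination directory.
    void clearSubclusterQuota(ClientProtocol nnClient, String destPath)
        throws IOException {
      nnClient.setQuota(destPath, HdfsConstants.QUOTA_RESET,
          HdfsConstants.QUOTA_RESET, null);
    }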
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 72f6c2f..20ae778 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -447,7 +447,9 @@
     <name>dfs.federation.router.quota.enable</name>
     <value>false</value>
     <description>
-      Set to true to enable quota system in Router.
+      Set to true to enable the quota system in the Router. When it is
+      enabled, setting or clearing a sub-cluster's quota directly is not
+      recommended, since the Router Admin server will override the
+      sub-cluster's quota with the global quota.
     </description>
   </property>
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index adc4383..959cd63 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -143,6 +143,8 @@ For performance reasons, the Router caches the quota usage and updates it period
 will be used for quota-verification during each WRITE RPC call invoked in RouterRPCSever. See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html)
 for the quota detail.
 
+Note: When global quota is enabled, setting or clearing a sub-cluster's quota directly is not recommended, since the Router Admin server will override the sub-cluster's quota with the global quota.
+
 ### State Store
 The (logically centralized, but physically distributed) State Store maintains:
 
@@ -421,7 +423,7 @@ Global quota supported in federation.
 
 | Property | Default | Description|
 |:---- |:---- |:---- |
-| dfs.federation.router.quota.enable | `false` | If `true`, the quota system enabled in the Router. |
+| dfs.federation.router.quota.enable | `false` | If `true`, the quota system is enabled in the Router. In that case, setting or clearing a sub-cluster's quota directly is not recommended, since the Router Admin server will override the sub-cluster's quota with the global quota. |
 | dfs.federation.router.quota-cache.update.interval | 60s | How often the Router updates quota cache. This setting supports multiple time unit suffixes. If no suffix is specified then milliseconds is assumed. |
 
 Metrics
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
index 6a29446..656b401 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
@@ -605,7 +605,7 @@ public class TestRouterQuota {
   @Test
   public void testQuotaRefreshWhenDestinationNotPresent() throws Exception {
     long nsQuota = 5;
-    long ssQuota = 3*BLOCK_SIZE;
+    long ssQuota = 3 * BLOCK_SIZE;
     final FileSystem nnFs = nnContext1.getFileSystem();
 
     // Add three mount tables:
@@ -709,4 +709,50 @@ public class TestRouterQuota {
     assertEquals(updatedSpace, cacheQuota2.getSpaceConsumed());
     assertEquals(updatedSpace, mountQuota2.getSpaceConsumed());
   }
+
+  @Test
+  public void testClearQuotaDefAfterRemovingMountTable() throws Exception {
+    long nsQuota = 5;
+    long ssQuota = 3 * BLOCK_SIZE;
+    final FileSystem nnFs = nnContext1.getFileSystem();
+
+    // Add one mount table:
+    // /setdir --> ns0---testdir15
+    // Create destination directory
+    nnFs.mkdirs(new Path("/testdir15"));
+
+    MountTable mountTable = MountTable.newInstance("/setdir",
+        Collections.singletonMap("ns0", "/testdir15"));
+    mountTable.setQuota(new RouterQuotaUsage.Builder().quota(nsQuota)
+        .spaceQuota(ssQuota).build());
+    addMountTable(mountTable);
+
+    // Update router quota
+    RouterQuotaUpdateService updateService =
+        routerContext.getRouter().getQuotaCacheUpdateService();
+    updateService.periodicInvoke();
+
+    RouterQuotaManager quotaManager =
+        routerContext.getRouter().getQuotaManager();
+    ClientProtocol client = nnContext1.getClient().getNamenode();
+    QuotaUsage routerQuota = quotaManager.getQuotaUsage("/setdir");
+    QuotaUsage subClusterQuota = client.getQuotaUsage("/testdir15");
+
+    // Verify current quota definitions
+    assertEquals(nsQuota, routerQuota.getQuota());
+    assertEquals(ssQuota, routerQuota.getSpaceQuota());
+    assertEquals(nsQuota, subClusterQuota.getQuota());
+    assertEquals(ssQuota, subClusterQuota.getSpaceQuota());
+
+    // Remove mount table
+    removeMountTable("/setdir");
+    updateService.periodicInvoke();
+    routerQuota = quotaManager.getQuotaUsage("/setdir");
+    subClusterQuota = client.getQuotaUsage("/testdir15");
+
+    // Verify quota definitions are cleared after removing the mount table
+    assertNull(routerQuota);
+    assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getQuota());
+    assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getSpaceQuota());
+  }
 }
\ No newline at end of file




[hadoop] 03/41: HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f61a816d35863f524d5d5885f9b8a4dd17daeb77
Author: Brahma Reddy Battula <br...@apache.org>
AuthorDate: Tue Oct 30 11:21:08 2018 +0530

    HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng.
---
 .../federation/resolver/MountTableResolver.java    | 15 ++++++--
 .../resolver/TestMountTableResolver.java           | 45 +++++++++++++++++-----
 .../federation/router/TestDisableNameservices.java | 36 ++++++++++-------
 3 files changed, 70 insertions(+), 26 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 121469f..9e69840 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -539,21 +539,28 @@ public class MountTableResolver
    * @param entry Mount table entry.
    * @return PathLocation containing the namespace, local path.
    */
-  private static PathLocation buildLocation(
-      final String path, final MountTable entry) {
-
+  private PathLocation buildLocation(
+      final String path, final MountTable entry) throws IOException {
     String srcPath = entry.getSourcePath();
     if (!path.startsWith(srcPath)) {
       LOG.error("Cannot build location, {} not a child of {}", path, srcPath);
       return null;
     }
+
+    List<RemoteLocation> dests = entry.getDestinations();
+    if (getClass() == MountTableResolver.class && dests.size() > 1) {
+      throw new IOException("Cannnot build location, "
+          + getClass().getSimpleName()
+          + " should not resolve multiple destinations for " + path);
+    }
+
     String remainingPath = path.substring(srcPath.length());
     if (remainingPath.startsWith(Path.SEPARATOR)) {
       remainingPath = remainingPath.substring(1);
     }
 
     List<RemoteLocation> locations = new LinkedList<>();
-    for (RemoteLocation oneDst : entry.getDestinations()) {
+    for (RemoteLocation oneDst : dests) {
       String nsId = oneDst.getNameserviceId();
       String dest = oneDst.getDest();
       String newPath = dest;
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index 5e3b861..14ccb61 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -79,6 +79,8 @@ public class TestMountTableResolver {
    * __usr
    * ____bin -> 2:/bin
    * __readonly -> 2:/tmp
+   * __multi -> 5:/dest1
+   *            6:/dest2
    *
    * @throws IOException If it cannot set the mount table.
    */
@@ -126,6 +128,12 @@ public class TestMountTableResolver {
     MountTable readOnlyEntry = MountTable.newInstance("/readonly", map);
     readOnlyEntry.setReadOnly(true);
     mountTable.addEntry(readOnlyEntry);
+
+    // /multi
+    map = getMountTableEntry("5", "/dest1");
+    map.put("6", "/dest2");
+    MountTable multiEntry = MountTable.newInstance("/multi", map);
+    mountTable.addEntry(multiEntry);
   }
 
   @Before
@@ -201,6 +209,17 @@ public class TestMountTableResolver {
     }
   }
 
+  @Test
+  public void testMultipleDestinations() throws IOException {
+    try {
+      mountTable.getDestinationForPath("/multi");
+      fail("The getDestinationForPath call should fail.");
+    } catch (IOException ioe) {
+      GenericTestUtils.assertExceptionContains(
+          "MountTableResolver should not resolve multiple destinations", ioe);
+    }
+  }
+
   private void compareLists(List<String> list1, String[] list2) {
     assertEquals(list1.size(), list2.length);
     for (String item : list2) {
@@ -236,8 +255,9 @@ public class TestMountTableResolver {
 
     // Check getting all mount points (virtual and real) beneath a path
     List<String> mounts = mountTable.getMountPoints("/");
-    assertEquals(4, mounts.size());
-    compareLists(mounts, new String[] {"tmp", "user", "usr", "readonly"});
+    assertEquals(5, mounts.size());
+    compareLists(mounts, new String[] {"tmp", "user", "usr",
+        "readonly", "multi"});
 
     mounts = mountTable.getMountPoints("/user");
     assertEquals(2, mounts.size());
@@ -263,6 +283,9 @@ public class TestMountTableResolver {
 
     mounts = mountTable.getMountPoints("/unknownpath");
     assertNull(mounts);
+
+    mounts = mountTable.getMountPoints("/multi");
+    assertEquals(0, mounts.size());
   }
 
   private void compareRecords(List<MountTable> list1, String[] list2) {
@@ -282,10 +305,10 @@ public class TestMountTableResolver {
 
     // Check listing the mount table records at or beneath a path
     List<MountTable> records = mountTable.getMounts("/");
-    assertEquals(9, records.size());
+    assertEquals(10, records.size());
     compareRecords(records, new String[] {"/", "/tmp", "/user", "/usr/bin",
         "user/a", "/user/a/demo/a", "/user/a/demo/b", "/user/b/file1.txt",
-        "readonly"});
+        "readonly", "multi"});
 
     records = mountTable.getMounts("/user");
     assertEquals(5, records.size());
@@ -305,6 +328,10 @@ public class TestMountTableResolver {
     assertEquals(1, records.size());
     compareRecords(records, new String[] {"/readonly"});
     assertTrue(records.get(0).isReadOnly());
+
+    records = mountTable.getMounts("/multi");
+    assertEquals(1, records.size());
+    compareRecords(records, new String[] {"/multi"});
   }
 
   @Test
@@ -313,7 +340,7 @@ public class TestMountTableResolver {
 
     // 5 mount points are present: /tmp, /user, /usr, /readonly, /multi
     compareLists(mountTable.getMountPoints("/"),
-        new String[] {"user", "usr", "tmp", "readonly"});
+        new String[] {"user", "usr", "tmp", "readonly", "multi"});
 
     // /tmp currently points to namespace 2
     assertEquals("2", mountTable.getDestinationForPath("/tmp/testfile.txt")
@@ -324,7 +351,7 @@ public class TestMountTableResolver {
 
     // Now 4 mount points are present: /user, /usr, /readonly, /multi
     compareLists(mountTable.getMountPoints("/"),
-        new String[] {"user", "usr", "readonly"});
+        new String[] {"user", "usr", "readonly", "multi"});
 
     // /tmp no longer exists, uses default namespace for mapping /
     assertEquals("1", mountTable.getDestinationForPath("/tmp/testfile.txt")
@@ -337,7 +364,7 @@ public class TestMountTableResolver {
 
     // 5 mount points are present: /tmp, /user, /usr, /readonly, /multi
     compareLists(mountTable.getMountPoints("/"),
-        new String[] {"user", "usr", "tmp", "readonly"});
+        new String[] {"user", "usr", "tmp", "readonly", "multi"});
 
     // /usr is virtual, uses namespace 1->/
     assertEquals("1", mountTable.getDestinationForPath("/usr/testfile.txt")
@@ -348,7 +375,7 @@ public class TestMountTableResolver {
 
     // Verify the remove failed
     compareLists(mountTable.getMountPoints("/"),
-        new String[] {"user", "usr", "tmp", "readonly"});
+        new String[] {"user", "usr", "tmp", "readonly", "multi"});
   }
 
   @Test
@@ -380,7 +407,7 @@ public class TestMountTableResolver {
 
     // Initial table loaded
     testDestination();
-    assertEquals(9, mountTable.getMounts("/").size());
+    assertEquals(10, mountTable.getMounts("/").size());
 
     // Replace table with /1 and /2
     List<MountTable> records = new ArrayList<>();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java
index 15b104d..610927d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestDisableNameservices.java
@@ -21,6 +21,7 @@ import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.simul
 import static org.apache.hadoop.util.Time.monotonicNow;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.util.Iterator;
@@ -43,13 +44,13 @@ import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
 import org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
-import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.store.DisabledNameserviceStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.DisableNameserviceRequest;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.codehaus.jettison.json.JSONObject;
 import org.junit.After;
 import org.junit.AfterClass;
@@ -106,14 +107,18 @@ public class TestDisableNameservices {
     // Setup a mount table to map to the two namespaces
     MountTableManager mountTable = routerAdminClient.getMountTableManager();
     Map<String, String> destinations = new TreeMap<>();
-    destinations.put("ns0", "/");
-    destinations.put("ns1", "/");
-    MountTable newEntry = MountTable.newInstance("/", destinations);
-    newEntry.setDestOrder(DestinationOrder.RANDOM);
+    destinations.put("ns0", "/dirns0");
+    MountTable newEntry = MountTable.newInstance("/dirns0", destinations);
     AddMountTableEntryRequest request =
         AddMountTableEntryRequest.newInstance(newEntry);
     mountTable.addMountTableEntry(request);
 
+    destinations = new TreeMap<>();
+    destinations.put("ns1", "/dirns1");
+    newEntry = MountTable.newInstance("/dirns1", destinations);
+    request = AddMountTableEntryRequest.newInstance(newEntry);
+    mountTable.addMountTableEntry(request);
+
     // Refresh the cache in the Router
     Router router = routerContext.getRouter();
     MountTableResolver mountTableResolver =
@@ -122,9 +127,9 @@ public class TestDisableNameservices {
 
     // Add a folder to each namespace
     NamenodeContext nn0 = cluster.getNamenode("ns0", null);
-    nn0.getFileSystem().mkdirs(new Path("/dirns0"));
+    nn0.getFileSystem().mkdirs(new Path("/dirns0/0"));
     NamenodeContext nn1 = cluster.getNamenode("ns1", null);
-    nn1.getFileSystem().mkdirs(new Path("/dirns1"));
+    nn1.getFileSystem().mkdirs(new Path("/dirns1/1"));
   }
 
   @AfterClass
@@ -153,14 +158,12 @@ public class TestDisableNameservices {
 
   @Test
   public void testWithoutDisabling() throws IOException {
-
     // ns0 is slow and renewLease should take a long time
     long t0 = monotonicNow();
     routerProtocol.renewLease("client0");
     long t = monotonicNow() - t0;
     assertTrue("It took too little: " + t + "ms",
         t > TimeUnit.SECONDS.toMillis(1));
-
     // Return the results from all subclusters even if slow
     FileSystem routerFs = routerContext.getFileSystem();
     FileStatus[] filesStatus = routerFs.listStatus(new Path("/"));
@@ -171,7 +174,6 @@ public class TestDisableNameservices {
 
   @Test
   public void testDisabling() throws Exception {
-
     disableNameservice("ns0");
 
     // renewLease should be fast as we are skipping ns0
@@ -180,12 +182,20 @@ public class TestDisableNameservices {
     long t = monotonicNow() - t0;
     assertTrue("It took too long: " + t + "ms",
         t < TimeUnit.SECONDS.toMillis(1));
-
     // We should not report anything from ns0
     FileSystem routerFs = routerContext.getFileSystem();
-    FileStatus[] filesStatus = routerFs.listStatus(new Path("/"));
+    FileStatus[] filesStatus = null;
+    try {
+      routerFs.listStatus(new Path("/"));
+      fail("The listStatus call should fail.");
+    } catch (IOException ioe) {
+      GenericTestUtils.assertExceptionContains(
+          "No remote locations available", ioe);
+    }
+
+    filesStatus = routerFs.listStatus(new Path("/dirns1"));
     assertEquals(1, filesStatus.length);
-    assertEquals("dirns1", filesStatus[0].getPath().getName());
+    assertEquals("1", filesStatus[0].getPath().getName());
   }
 
   @Test
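
The interesting detail in the patch above is the
getClass() == MountTableResolver.class comparison: the multi-destination
rejection only fires for the base resolver itself, so subclasses such as
MultipleDestinationMountTableResolver keep resolving multiple destinations.
A self-contained sketch of that guard pattern (BaseResolver, MultiResolver
and GuardDemo are illustrative names, not Hadoop classes):

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

class BaseResolver {
  String resolve(String path, List<String> dests) throws IOException {
    // Same guard as buildLocation(): only the exact base class refuses
    // to choose among multiple destinations; subclasses are exempt.
    if (getClass() == BaseResolver.class && dests.size() > 1) {
      throw new IOException("Cannot build location, "
          + getClass().getSimpleName()
          + " should not resolve multiple destinations for " + path);
    }
    return dests.get(0);
  }
}

class MultiResolver extends BaseResolver {
  // Inherits resolve(); getClass() differs, so multiple destinations are
  // accepted and a real subclass would apply an ordering policy here.
}

public class GuardDemo {
  public static void main(String[] args) throws IOException {
    List<String> dests = Arrays.asList("5:/dest1", "6:/dest2");
    System.out.println(new MultiResolver().resolve("/multi", dests)); // ok
    new BaseResolver().resolve("/multi", dests); // throws IOException
  }
}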




[hadoop] 34/41: HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ec52346bbe675adab122f7a4d5ace14747f5d32c
Author: Takanobu Asanuma <ta...@apache.org>
AuthorDate: Tue Feb 5 06:06:05 2019 +0900

    HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
---
 .../org/apache/hadoop/fs/contract/AbstractContractAppendTest.java   | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
index d61b635..02a8996 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
@@ -133,6 +133,12 @@ public abstract class AbstractContractAppendTest extends AbstractFSContractTestB
     assertPathExists("original file does not exist", target);
     byte[] dataset = dataset(256, 'a', 'z');
     FSDataOutputStream outputStream = getFileSystem().append(target);
+    if (isSupported(CREATE_VISIBILITY_DELAYED)) {
+      // Some filesystems, such as WebHDFS, do not assure sequential
+      // consistency; in that case a delay is needed. We cannot check the
+      // lease since it is closed on the client side, so simply sleep.
+      Thread.sleep(10);
+    }
     outputStream.write(dataset);
     Path renamed = new Path(testPath, "renamed");
     rename(target, renamed);
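
For context, isSupported() in the contract test base reads per-filesystem
capabilities from a contract XML file, so only filesystems that declare
delayed create visibility (such as WebHDFS) pay the extra sleep. A hedged
sketch of that gating pattern; the "fs.contract." prefix follows Hadoop's
contract-option convention, but the exact key string here is an assumption:

import org.apache.hadoop.conf.Configuration;

public class ContractOptionProbe {
  // Assumed key backing CREATE_VISIBILITY_DELAYED in the contract XML.
  static final String KEY = "fs.contract.create-visibility-delayed";

  public static void main(String[] args) throws InterruptedException {
    Configuration conf = new Configuration(false);
    conf.setBoolean(KEY, true); // as a WebHDFS contract file would declare
    if (conf.getBoolean(KEY, false)) {
      // Mirrors the fix above: wait briefly before racing the rename,
      // since the just-appended stream may not be visible yet.
      Thread.sleep(10);
    }
    System.out.println("delayed-visibility workaround applied");
  }
}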




[hadoop] 29/41: HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 11210e76b2bc4538cd2d8432493b46050f809f69
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Sat Jan 19 15:23:15 2019 -0800

    HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan.
---
 .../federation/router/RouterClientProtocol.java    |   2 +-
 .../server/federation/router/RouterRpcClient.java  |   4 +-
 .../server/federation/router/TestRouterRpc.java    |  27 +++
 .../federation/router/TestRouterRpcSingleNS.java   | 211 +++++++++++++++++++++
 4 files changed, 241 insertions(+), 3 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index c41959e..09f7e5f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -869,7 +869,7 @@ public class RouterClientProtocol implements ClientProtocol {
     rpcServer.checkOperation(NameNode.OperationCategory.UNCHECKED);
 
     RemoteMethod method = new RemoteMethod("saveNamespace",
-        new Class<?>[] {Long.class, Long.class}, timeWindow, txGap);
+        new Class<?>[] {long.class, long.class}, timeWindow, txGap);
     final Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
     Map<FederationNamespaceInfo, Boolean> ret =
         rpcClient.invokeConcurrent(nss, method, true, false, boolean.class);
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index c4d3a20..0b15333 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -1045,7 +1045,7 @@ public class RouterRpcClient {
       Class<?> proto = method.getProtocol();
       Object[] paramList = method.getParams(location);
       Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
-      return Collections.singletonMap(location, clazz.cast(result));
+      return Collections.singletonMap(location, (R) result);
     }
 
     List<T> orderedLocations = new LinkedList<>();
@@ -1103,7 +1103,7 @@ public class RouterRpcClient {
         try {
           Future<Object> future = futures.get(i);
           Object result = future.get();
-          results.put(location, clazz.cast(result));
+          results.put(location, (R) result);
         } catch (CancellationException ce) {
           T loc = orderedLocations.get(i);
           String msg = "Invocation to \"" + loc + "\" for \""
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index 8632203..760d755 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -111,6 +111,8 @@ import com.google.common.collect.Maps;
 /**
  * Tests the RPC interface of the {@link Router} implemented by
  * {@link RouterRpcServer}.
+ * Tests covering the functionality of RouterRpcServer with
+ * multiple nameservices.
  */
 public class TestRouterRpc {
 
@@ -1256,6 +1258,31 @@ public class TestRouterRpc {
   }
 
   @Test
+  public void testGetCurrentTXIDandRollEdits() throws IOException {
+    Long rollEdits = routerProtocol.rollEdits();
+    Long currentTXID = routerProtocol.getCurrentEditLogTxid();
+
+    assertEquals(rollEdits, currentTXID);
+  }
+
+  @Test
+  public void testSaveNamespace() throws IOException {
+    cluster.getCluster().getFileSystem(0)
+        .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
+    cluster.getCluster().getFileSystem(1)
+        .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
+
+    Boolean saveNamespace = routerProtocol.saveNamespace(0, 0);
+
+    assertTrue(saveNamespace);
+
+    cluster.getCluster().getFileSystem(0)
+        .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
+    cluster.getCluster().getFileSystem(1)
+        .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
+  }
+
+  @Test
   public void testNamenodeMetrics() throws Exception {
     final NamenodeBeanMetrics metrics =
         router.getRouter().getNamenodeMetrics();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcSingleNS.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcSingleNS.java
new file mode 100644
index 0000000..ae0afa4
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcSingleNS.java
@@ -0,0 +1,211 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdfs.NameNodeProxies;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createFile;
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.verifyFileExists;
+
+/**
+ * Tests the RPC interface of the {@link Router} implemented by
+ * {@link RouterRpcServer}.
+ * Tests covering the functionality of RouterRpcServer with a
+ * single nameservice.
+ */
+public class TestRouterRpcSingleNS {
+
+  /**
+   * Federated HDFS cluster.
+   */
+  private static MiniRouterDFSCluster cluster;
+
+  /**
+   * Random Router for this federated cluster.
+   */
+  private MiniRouterDFSCluster.RouterContext router;
+
+  /**
+   * Random nameservice in the federated cluster.
+   */
+  private String ns;
+  /**
+   * First namenode in the nameservice.
+   */
+  private MiniRouterDFSCluster.NamenodeContext namenode;
+
+  /**
+   * Client interface to the Router.
+   */
+  private ClientProtocol routerProtocol;
+  /**
+   * Client interface to the Namenode.
+   */
+  private ClientProtocol nnProtocol;
+
+  /**
+   * NameNodeProtocol interface to the Router.
+   */
+  private NamenodeProtocol routerNamenodeProtocol;
+  /**
+   * NameNodeProtocol interface to the Namenode.
+   */
+  private NamenodeProtocol nnNamenodeProtocol;
+
+  /**
+   * Filesystem interface to the Router.
+   */
+  private FileSystem routerFS;
+  /**
+   * Filesystem interface to the Namenode.
+   */
+  private FileSystem nnFS;
+
+  /**
+   * File in the Router.
+   */
+  private String routerFile;
+  /**
+   * File in the Namenode.
+   */
+  private String nnFile;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    cluster = new MiniRouterDFSCluster(false, 1);
+    cluster.setNumDatanodesPerNameservice(2);
+
+    // Start NNs and DNs and wait until ready
+    cluster.startCluster();
+
+    // Start routers with only an RPC service
+    Configuration routerConf = new RouterConfigBuilder().metrics().rpc()
+        .build();
+    // We decrease the DN cache times to make the test faster
+    routerConf.setTimeDuration(RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1,
+        TimeUnit.SECONDS);
+    cluster.addRouterOverrides(routerConf);
+    cluster.startRouters();
+
+    // Register and verify all NNs with all routers
+    cluster.registerNamenodes();
+    cluster.waitNamenodeRegistration();
+  }
+
+  @AfterClass
+  public static void tearDown() {
+    cluster.shutdown();
+  }
+
+  @Before
+  public void testSetup() throws Exception {
+
+    // Create mock locations
+    cluster.installMockLocations();
+
+    // Delete all files via the NNs and verify
+    cluster.deleteAllFiles();
+
+    // Create test fixtures on NN
+    cluster.createTestDirectoriesNamenode();
+
+    // Wait to ensure NN has fully created its test directories
+    Thread.sleep(100);
+
+    // Random router for this test
+    MiniRouterDFSCluster.RouterContext rndRouter = cluster.getRandomRouter();
+    this.setRouter(rndRouter);
+
+    // Pick a namenode for this test
+    String ns0 = cluster.getNameservices().get(0);
+    this.setNs(ns0);
+    this.setNamenode(cluster.getNamenode(ns0, null));
+
+    // Create a test file on the NN
+    Random rnd = new Random();
+    String randomFile = "testfile-" + rnd.nextInt();
+    this.nnFile = cluster.getNamenodeTestDirectoryForNS(ns) + "/" + randomFile;
+    this.routerFile = cluster.getFederatedTestDirectoryForNS(ns) + "/"
+        + randomFile;
+
+    createFile(nnFS, nnFile, 32);
+    verifyFileExists(nnFS, nnFile);
+  }
+
+  protected void setRouter(MiniRouterDFSCluster.RouterContext r)
+      throws IOException, URISyntaxException {
+    this.router = r;
+    this.routerProtocol = r.getClient().getNamenode();
+    this.routerFS = r.getFileSystem();
+    this.routerNamenodeProtocol = NameNodeProxies.createProxy(router.getConf(),
+        router.getFileSystem().getUri(), NamenodeProtocol.class).getProxy();
+  }
+
+  protected void setNs(String nameservice) {
+    this.ns = nameservice;
+  }
+
+  protected void setNamenode(MiniRouterDFSCluster.NamenodeContext nn)
+      throws IOException, URISyntaxException {
+    this.namenode = nn;
+    this.nnProtocol = nn.getClient().getNamenode();
+    this.nnFS = nn.getFileSystem();
+
+    // Namenode from the default namespace
+    String ns0 = cluster.getNameservices().get(0);
+    MiniRouterDFSCluster.NamenodeContext nn0 = cluster.getNamenode(ns0, null);
+    this.nnNamenodeProtocol = NameNodeProxies.createProxy(nn0.getConf(),
+        nn0.getFileSystem().getUri(), NamenodeProtocol.class).getProxy();
+  }
+
+  @Test
+  public void testGetCurrentTXIDandRollEdits() throws IOException {
+    Long rollEdits = routerProtocol.rollEdits();
+    Long currentTXID = routerProtocol.getCurrentEditLogTxid();
+
+    assertEquals(rollEdits, currentTXID);
+  }
+
+  @Test
+  public void testSaveNamespace() throws IOException {
+    cluster.getCluster().getFileSystem()
+        .setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
+    Boolean saveNamespace = routerProtocol.saveNamespace(0, 0);
+
+    assertTrue(saveNamespace);
+  }
+}
\ No newline at end of file
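
The core of this fix is the switch from Long.class to long.class in the
RemoteMethod for saveNamespace(): the Router looks the target method up
reflectively, and ClientProtocol#saveNamespace is declared with primitive
longs, so the boxed classes never match. A standalone sketch of the failure
mode (PrimitiveLookup is an illustrative class, not a Hadoop API):

import java.lang.reflect.Method;

public class PrimitiveLookup {

  // Mirrors the shape of ClientProtocol#saveNamespace(long, long).
  public boolean saveNamespace(long timeWindow, long txGap) {
    return true;
  }

  public static void main(String[] args) throws Exception {
    Class<?> c = PrimitiveLookup.class;
    // Succeeds: primitive parameter types match the declaration.
    Method ok = c.getMethod("saveNamespace", long.class, long.class);
    System.out.println("found: " + ok);
    try {
      // Throws NoSuchMethodException: boxed Long does not match the
      // primitive long signature.
      c.getMethod("saveNamespace", Long.class, Long.class);
    } catch (NoSuchMethodException e) {
      System.out.println("not found with Long.class");
    }
  }
}

The companion change from clazz.cast(result) to an unchecked (R) cast
avoids the same primitive/boxed mismatch at cast time: boolean.class.cast()
rejects a boxed Boolean because primitive classes have no instances.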




[hadoop] 33/41: HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 71f20661fc51ecae1103a8e9b35b254a672fd419
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Mon Jan 28 10:04:24 2019 -0800

    HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena.
---
 .../federation/router/RouterClientProtocol.java    |   3 +-
 .../federation/router/RouterNamenodeProtocol.java  |  20 +---
 .../server/federation/router/RouterRpcServer.java  |  23 +++++
 .../federation/router/RouterStoragePolicy.java     |   7 +-
 .../hdfs/server/federation/MockResolver.java       |  12 +++
 .../server/federation/router/TestRouterRpc.java    | 109 ++++++++++++++++++---
 6 files changed, 139 insertions(+), 35 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 485c103..f20b4b6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -195,8 +195,7 @@ public class RouterClientProtocol implements ClientProtocol {
     rpcServer.checkOperation(NameNode.OperationCategory.READ);
 
     RemoteMethod method = new RemoteMethod("getServerDefaults");
-    String ns = subclusterResolver.getDefaultNamespace();
-    return (FsServerDefaults) rpcClient.invokeSingle(ns, method);
+    return rpcServer.invokeAtAvailableNs(method, FsServerDefaults.class);
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
index bf0db6e..c6b0209 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
@@ -24,7 +24,6 @@ import java.util.Map.Entry;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
-import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
 import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
 import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
@@ -45,14 +44,11 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
   private final RouterRpcServer rpcServer;
   /** RPC clients to connect to the Namenodes. */
   private final RouterRpcClient rpcClient;
-  /** Interface to map global name space to HDFS subcluster name spaces. */
-  private final FileSubclusterResolver subclusterResolver;
 
 
   public RouterNamenodeProtocol(RouterRpcServer server) {
     this.rpcServer = server;
     this.rpcClient =  this.rpcServer.getRPCClient();
-    this.subclusterResolver = this.rpcServer.getSubclusterResolver();
   }
 
   @Override
@@ -94,33 +90,27 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
   public ExportedBlockKeys getBlockKeys() throws IOException {
     rpcServer.checkOperation(OperationCategory.READ);
 
-    // We return the information from the default name space
-    String defaultNsId = subclusterResolver.getDefaultNamespace();
     RemoteMethod method =
         new RemoteMethod(NamenodeProtocol.class, "getBlockKeys");
-    return rpcClient.invokeSingle(defaultNsId, method, ExportedBlockKeys.class);
+    return rpcServer.invokeAtAvailableNs(method, ExportedBlockKeys.class);
   }
 
   @Override
   public long getTransactionID() throws IOException {
     rpcServer.checkOperation(OperationCategory.READ);
 
-    // We return the information from the default name space
-    String defaultNsId = subclusterResolver.getDefaultNamespace();
     RemoteMethod method =
         new RemoteMethod(NamenodeProtocol.class, "getTransactionID");
-    return rpcClient.invokeSingle(defaultNsId, method, long.class);
+    return rpcServer.invokeAtAvailableNs(method, long.class);
   }
 
   @Override
   public long getMostRecentCheckpointTxId() throws IOException {
     rpcServer.checkOperation(OperationCategory.READ);
 
-    // We return the information from the default name space
-    String defaultNsId = subclusterResolver.getDefaultNamespace();
     RemoteMethod method =
         new RemoteMethod(NamenodeProtocol.class, "getMostRecentCheckpointTxId");
-    return rpcClient.invokeSingle(defaultNsId, method, long.class);
+    return rpcServer.invokeAtAvailableNs(method, long.class);
   }
 
   @Override
@@ -133,11 +123,9 @@ public class RouterNamenodeProtocol implements NamenodeProtocol {
   public NamespaceInfo versionRequest() throws IOException {
     rpcServer.checkOperation(OperationCategory.READ);
 
-    // We return the information from the default name space
-    String defaultNsId = subclusterResolver.getDefaultNamespace();
     RemoteMethod method =
         new RemoteMethod(NamenodeProtocol.class, "versionRequest");
-    return rpcClient.invokeSingle(defaultNsId, method, NamespaceInfo.class);
+    return rpcServer.invokeAtAvailableNs(method, NamespaceInfo.class);
   }
 
   @Override
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 0d4f94c..be6a9b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -479,6 +479,29 @@ public class RouterRpcServer extends AbstractService
     return methodName;
   }
 
+  /**
+   * Invokes the method at the default namespace; if the default namespace
+   * is not available, invokes it at the first available namespace.
+   * @param <T> Expected return type.
+   * @param method The remote method.
+   * @param clazz Class for the expected return type.
+   * @return The response received after invoking the method.
+   * @throws IOException If the method cannot be invoked at any namespace.
+   */
+  <T> T invokeAtAvailableNs(RemoteMethod method, Class<T> clazz)
+      throws IOException {
+    String nsId = subclusterResolver.getDefaultNamespace();
+    if (!nsId.isEmpty()) {
+      return rpcClient.invokeSingle(nsId, method, clazz);
+    }
+    // If the default NS is not present, return the result from the first
+    // namespace.
+    Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
+    if (nss.isEmpty()) {
+      throw new IOException("No namespace available.");
+    }
+    nsId = nss.iterator().next().getNameserviceId();
+    return rpcClient.invokeSingle(nsId, method, clazz);
+  }
+
   @Override // ClientProtocol
   public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
       throws IOException {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
index 7145940..8a55b9a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hdfs.server.federation.router;
 
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
-import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 
@@ -36,13 +35,10 @@ public class RouterStoragePolicy {
   private final RouterRpcServer rpcServer;
   /** RPC clients to connect to the Namenodes. */
   private final RouterRpcClient rpcClient;
-  /** Interface to map global name space to HDFS subcluster name spaces. */
-  private final FileSubclusterResolver subclusterResolver;
 
   public RouterStoragePolicy(RouterRpcServer server) {
     this.rpcServer = server;
     this.rpcClient = this.rpcServer.getRPCClient();
-    this.subclusterResolver = this.rpcServer.getSubclusterResolver();
   }
 
   public void setStoragePolicy(String src, String policyName)
@@ -61,8 +57,7 @@ public class RouterStoragePolicy {
     rpcServer.checkOperation(NameNode.OperationCategory.READ);
 
     RemoteMethod method = new RemoteMethod("getStoragePolicies");
-    String ns = subclusterResolver.getDefaultNamespace();
-    return (BlockStoragePolicy[]) rpcClient.invokeSingle(ns, method);
+    return rpcServer.invokeAtAvailableNs(method, BlockStoragePolicy[].class);
   }
 
   public void unsetStoragePolicy(String src) throws IOException {
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
index 9bff007..cdeab46 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -57,6 +57,7 @@ public class MockResolver
   private Map<String, List<RemoteLocation>> locations = new HashMap<>();
   private Set<FederationNamespaceInfo> namespaces = new HashSet<>();
   private String defaultNamespace = null;
+  private boolean disableDefaultNamespace = false;
 
   public MockResolver() {
     this.cleanRegistrations();
@@ -322,8 +323,19 @@ public class MockResolver
   public void setRouterId(String router) {
   }
 
+  /**
+   * Mocks the availability of the default namespace.
+   * @param b If true, the default namespace is unset.
+   */
+  public void setDisableNamespace(boolean b) {
+    this.disableDefaultNamespace = b;
+  }
+
   @Override
   public String getDefaultNamespace() {
+    if (disableDefaultNamespace) {
+      return "";
+    }
     return defaultNamespace;
   }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index 760d755..2d26e11 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -56,6 +56,7 @@ import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsServerDefaults;
 import org.apache.hadoop.fs.Options;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
@@ -829,6 +830,40 @@ public class TestRouterRpc {
   }
 
   @Test
+  public void testListStoragePolicies() throws IOException, URISyntaxException {
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      BlockStoragePolicy[] policies = namenode.getClient().getStoragePolicies();
+      assertArrayEquals(policies, routerProtocol.getStoragePolicies());
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      assertArrayEquals(policies, routerProtocol.getStoragePolicies());
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
+  }
+
+  @Test
+  public void testGetServerDefaults() throws IOException, URISyntaxException {
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      FsServerDefaults defaults = namenode.getClient().getServerDefaults();
+      assertEquals(defaults.getBlockSize(),
+          routerProtocol.getServerDefaults().getBlockSize());
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      assertEquals(defaults.getBlockSize(),
+          routerProtocol.getServerDefaults().getBlockSize());
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
+  }
+
+  @Test
   public void testProxyGetPreferedBlockSize() throws Exception {
 
     // Query via NN and Router and verify
@@ -1012,8 +1047,23 @@ public class TestRouterRpc {
 
   @Test
   public void testProxyVersionRequest() throws Exception {
-    NamespaceInfo rVersion = routerNamenodeProtocol.versionRequest();
-    NamespaceInfo nnVersion = nnNamenodeProtocol.versionRequest();
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      NamespaceInfo rVersion = routerNamenodeProtocol.versionRequest();
+      NamespaceInfo nnVersion = nnNamenodeProtocol.versionRequest();
+      compareVersion(rVersion, nnVersion);
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      rVersion = routerNamenodeProtocol.versionRequest();
+      compareVersion(rVersion, nnVersion);
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
+  }
+
+  private void compareVersion(NamespaceInfo rVersion, NamespaceInfo nnVersion) {
     assertEquals(nnVersion.getBlockPoolID(), rVersion.getBlockPoolID());
     assertEquals(nnVersion.getNamespaceID(), rVersion.getNamespaceID());
     assertEquals(nnVersion.getClusterID(), rVersion.getClusterID());
@@ -1023,8 +1073,24 @@ public class TestRouterRpc {
 
   @Test
   public void testProxyGetBlockKeys() throws Exception {
-    ExportedBlockKeys rKeys = routerNamenodeProtocol.getBlockKeys();
-    ExportedBlockKeys nnKeys = nnNamenodeProtocol.getBlockKeys();
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      ExportedBlockKeys rKeys = routerNamenodeProtocol.getBlockKeys();
+      ExportedBlockKeys nnKeys = nnNamenodeProtocol.getBlockKeys();
+      compareBlockKeys(rKeys, nnKeys);
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      rKeys = routerNamenodeProtocol.getBlockKeys();
+      compareBlockKeys(rKeys, nnKeys);
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
+  }
+
+  private void compareBlockKeys(ExportedBlockKeys rKeys,
+      ExportedBlockKeys nnKeys) {
     assertEquals(nnKeys.getCurrentKey(), rKeys.getCurrentKey());
     assertEquals(nnKeys.getKeyUpdateInterval(), rKeys.getKeyUpdateInterval());
     assertEquals(nnKeys.getTokenLifetime(), rKeys.getTokenLifetime());
@@ -1054,17 +1120,38 @@ public class TestRouterRpc {
 
   @Test
   public void testProxyGetTransactionID() throws IOException {
-    long routerTransactionID = routerNamenodeProtocol.getTransactionID();
-    long nnTransactionID = nnNamenodeProtocol.getTransactionID();
-    assertEquals(nnTransactionID, routerTransactionID);
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      long routerTransactionID = routerNamenodeProtocol.getTransactionID();
+      long nnTransactionID = nnNamenodeProtocol.getTransactionID();
+      assertEquals(nnTransactionID, routerTransactionID);
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      routerTransactionID = routerNamenodeProtocol.getTransactionID();
+      assertEquals(nnTransactionID, routerTransactionID);
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
   }
 
   @Test
   public void testProxyGetMostRecentCheckpointTxId() throws IOException {
-    long routerCheckPointId =
-        routerNamenodeProtocol.getMostRecentCheckpointTxId();
-    long nnCheckPointId = nnNamenodeProtocol.getMostRecentCheckpointTxId();
-    assertEquals(nnCheckPointId, routerCheckPointId);
+    MockResolver resolver =
+        (MockResolver) router.getRouter().getSubclusterResolver();
+    try {
+      // Check with default namespace specified.
+      long routerCheckPointId =
+          routerNamenodeProtocol.getMostRecentCheckpointTxId();
+      long nnCheckPointId = nnNamenodeProtocol.getMostRecentCheckpointTxId();
+      assertEquals(nnCheckPointId, routerCheckPointId);
+      // Check with default namespace unspecified.
+      resolver.setDisableNamespace(true);
+      routerCheckPointId = routerNamenodeProtocol.getMostRecentCheckpointTxId();
+      assertEquals(nnCheckPointId, routerCheckPointId);
+    } finally {
+      resolver.setDisableNamespace(false);
+    }
   }
 
   @Test
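
The fallback introduced by invokeAtAvailableNs() above is small enough to
show standalone: prefer the default namespace when one is configured,
otherwise route to the first registered namespace instead of failing. A
minimal sketch; AvailableNsDemo and NsCall are illustrative names, not
Hadoop APIs:

import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

public class AvailableNsDemo {

  interface NsCall<T> {
    T call(String nsId) throws IOException;
  }

  static <T> T invokeAtAvailableNs(String defaultNs, Set<String> namespaces,
      NsCall<T> call) throws IOException {
    // Prefer the default namespace when one is configured.
    if (defaultNs != null && !defaultNs.isEmpty()) {
      return call.call(defaultNs);
    }
    // Otherwise fall back to the first available namespace.
    if (namespaces.isEmpty()) {
      throw new IOException("No namespace available.");
    }
    return call.call(namespaces.iterator().next());
  }

  public static void main(String[] args) throws IOException {
    Set<String> nss = new LinkedHashSet<>();
    nss.add("ns0");
    nss.add("ns1");
    // Default namespace unset: the call lands on ns0.
    System.out.println(invokeAtAvailableNs("", nss, ns -> "ran on " + ns));
  }
}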




[hadoop] 37/41: HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b28580a6251732f8eabb76251e61e8e8902f3d2b
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Tue Feb 12 10:44:02 2019 -0800

    HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui.
---
 .../federation/metrics/FederationRPCMBean.java     |  2 +
 .../federation/metrics/FederationRPCMetrics.java   | 11 +++
 .../metrics/FederationRPCPerformanceMonitor.java   |  5 ++
 .../router/NoNamenodesAvailableException.java      | 33 +++++++++
 .../server/federation/router/RouterRpcClient.java  | 16 +++-
 .../server/federation/router/RouterRpcMonitor.java |  5 ++
 .../server/federation/FederationTestUtils.java     | 38 ++++++++++
 .../router/TestRouterClientRejectOverload.java     | 86 ++++++++++++++++++++--
 .../router/TestRouterRPCClientRetries.java         |  2 +-
 9 files changed, 188 insertions(+), 10 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
index 973c398..76b3ca6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
@@ -46,6 +46,8 @@ public interface FederationRPCMBean {
 
   long getProxyOpRetries();
 
+  long getProxyOpNoNamenodes();
+
   long getRouterFailureStateStoreOps();
 
   long getRouterFailureReadOnlyOps();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
index cce4b86..8e57c6b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
@@ -60,6 +60,8 @@ public class FederationRPCMetrics implements FederationRPCMBean {
   private MutableCounterLong proxyOpNotImplemented;
   @Metric("Number of operation retries")
   private MutableCounterLong proxyOpRetries;
+  @Metric("Number of operations to hit no namenodes available")
+  private MutableCounterLong proxyOpNoNamenodes;
 
   @Metric("Failed requests due to State Store unavailable")
   private MutableCounterLong routerFailureStateStore;
@@ -138,6 +140,15 @@ public class FederationRPCMetrics implements FederationRPCMBean {
     return proxyOpRetries.value();
   }
 
+  public void incrProxyOpNoNamenodes() {
+    proxyOpNoNamenodes.incr();
+  }
+
+  @Override
+  public long getProxyOpNoNamenodes() {
+    return proxyOpNoNamenodes.value();
+  }
+
   public void incrRouterFailureStateStore() {
     routerFailureStateStore.incr();
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
index 15725d1..cbd63de 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
@@ -171,6 +171,11 @@ public class FederationRPCPerformanceMonitor implements RouterRpcMonitor {
   }
 
   @Override
+  public void proxyOpNoNamenodes() {
+    metrics.incrProxyOpNoNamenodes();
+  }
+
+  @Override
   public void routerFailureStateStore() {
     metrics.incrRouterFailureStateStore();
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java
new file mode 100644
index 0000000..7eabf00
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+
+
+/**
+ * Exception when no namenodes are available.
+ */
+public class NoNamenodesAvailableException extends IOException {
+
+  private static final long serialVersionUID = 1L;
+
+  public NoNamenodesAvailableException(String nsId, IOException ioe) {
+    super("No namenodes available under nameservice " + nsId, ioe);
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index f5985ee..d21bde3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -61,6 +61,7 @@ import org.apache.hadoop.io.retry.RetryPolicies;
 import org.apache.hadoop.io.retry.RetryPolicy;
 import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.RetriableException;
 import org.apache.hadoop.ipc.StandbyException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.slf4j.Logger;
@@ -302,8 +303,8 @@ public class RouterRpcClient {
    * @param retryCount Number of retries.
    * @param nsId Nameservice ID.
    * @return Retry decision.
-   * @throws IOException Original exception if the retry policy generates one
-   *                     or IOException for no available namenodes.
+   * @throws NoNamenodesAvailableException If no namenodes are still
+   *         available after retrying.
    */
   private RetryDecision shouldRetry(final IOException ioe, final int retryCount,
       final String nsId) throws IOException {
@@ -313,8 +314,7 @@ public class RouterRpcClient {
       if (retryCount == 0) {
         return RetryDecision.RETRY;
       } else {
-        throw new IOException("No namenode available under nameservice " + nsId,
-            ioe);
+        throw new NoNamenodesAvailableException(nsId, ioe);
       }
     }
 
@@ -405,6 +405,14 @@ public class RouterRpcClient {
           StandbyException se = new StandbyException(ioe.getMessage());
           se.initCause(ioe);
           throw se;
+        } else if (ioe instanceof NoNamenodesAvailableException) {
+          if (this.rpcMonitor != null) {
+            this.rpcMonitor.proxyOpNoNamenodes();
+          }
+          LOG.error("Can not get available namenode for {} {} error: {}",
+              nsId, rpcAddress, ioe.getMessage());
+          // Throw RetriableException so that client can retry
+          throw new RetriableException(ioe);
         } else {
           // Other communication error, this is a failure
           // Communication retries are handled by the retry policy
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcMonitor.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcMonitor.java
index 7af71af..5a2adb9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcMonitor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcMonitor.java
@@ -93,6 +93,11 @@ public interface RouterRpcMonitor {
   void proxyOpRetries();
 
   /**
+   * Failed to proxy an operation because of no namenodes available.
+   */
+  void proxyOpNoNamenodes();
+
+  /**
    * If the Router cannot contact the State Store in an operation.
    */
   void routerFailureStateStore();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
index d92edac..5434224 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/FederationTestUtils.java
@@ -48,6 +48,7 @@ import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.NamenodeContext;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
 import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
@@ -374,4 +375,41 @@ public final class FederationTestUtils {
     Whitebox.setInternalState(rpcClient, "connectionManager",
         spyConnectionManager);
   }
+
+  /**
+   * Switch the namenodes of all HDFS nameservices to standby.
+   * @param cluster A federated HDFS cluster.
+   */
+  public static void transitionClusterNSToStandby(
+      StateStoreDFSCluster cluster) {
+    // Name services of the cluster
+    List<String> nameServiceList = cluster.getNameservices();
+
+    // Change the namenodes of each nameservice to standby
+    for (String nameService : nameServiceList) {
+      List<NamenodeContext> nnList = cluster.getNamenodes(nameService);
+      for (NamenodeContext namenodeContext : nnList) {
+        cluster.switchToStandby(nameService, namenodeContext.getNamenodeId());
+      }
+    }
+  }
+
+  /**
+   * Switch the namenode at the given index of every HDFS nameservice
+   * to active.
+   * @param cluster A federated HDFS cluster.
+   * @param index The index of the namenode to transition to active.
+   */
+  public static void transitionClusterNSToActive(
+      StateStoreDFSCluster cluster, int index) {
+    // Name services of the cluster
+    List<String> nameServiceList = cluster.getNameservices();
+
+    // Change the namenode at the given index of each nameservice to active
+    for (String nameService : nameServiceList) {
+      List<NamenodeContext> listNamenodeContext =
+          cluster.getNamenodes(nameService);
+      cluster.switchToActive(nameService,
+          listNamenodeContext.get(index).getNamenodeId());
+    }
+  }
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
index 0664159..14bd7b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.hdfs.server.federation.router;
 
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.simulateSlowNamenode;
 import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.simulateThrowExceptionRouterRpcServer;
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.transitionClusterNSToStandby;
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.transitionClusterNSToActive;
 import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -27,6 +29,7 @@ import static org.junit.Assert.fail;
 import java.io.IOException;
 import java.net.URI;
 import java.util.ArrayList;
+import java.util.Collection;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -46,7 +49,9 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.ipc.StandbyException;
 import org.junit.After;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -71,14 +76,19 @@ public class TestRouterClientRejectOverload {
     }
   }
 
-  private void setupCluster(boolean overloadControl) throws Exception {
+  private void setupCluster(boolean overloadControl, boolean ha)
+      throws Exception {
     // Build and start a federated cluster
-    cluster = new StateStoreDFSCluster(false, 2);
+    cluster = new StateStoreDFSCluster(ha, 2);
     Configuration routerConf = new RouterConfigBuilder()
         .stateStore()
         .metrics()
         .admin()
         .rpc()
+        .heartbeat()
         .build();
 
     // Reduce the number of RPC client threads so it is easy to overload the Router
@@ -98,7 +108,7 @@ public class TestRouterClientRejectOverload {
 
   @Test
   public void testWithoutOverloadControl() throws Exception {
-    setupCluster(false);
+    setupCluster(false, false);
 
     // Nobody should get overloaded
     testOverloaded(0);
@@ -121,7 +131,7 @@ public class TestRouterClientRejectOverload {
 
   @Test
   public void testOverloadControl() throws Exception {
-    setupCluster(true);
+    setupCluster(true, false);
 
     List<RouterContext> routers = cluster.getRouters();
     FederationRPCMetrics rpcMetrics0 =
@@ -244,7 +254,7 @@ public class TestRouterClientRejectOverload {
 
   @Test
   public void testConnectionNullException() throws Exception {
-    setupCluster(false);
+    setupCluster(false, false);
 
     // Choose 1st router
     RouterContext routerContext = cluster.getRouters().get(0);
@@ -280,4 +290,70 @@ public class TestRouterClientRejectOverload {
     assertEquals(originalRouter1Failures,
         rpcMetrics1.getProxyOpFailureCommunicate());
   }
+
+  /**
+   * When a failover occurs, no namenodes are available for a short time.
+   * The client should succeed after a few retries.
+   */
+  @Test
+  public void testNoNamenodesAvailable() throws Exception {
+    setupCluster(false, true);
+
+    transitionClusterNSToStandby(cluster);
+
+    Configuration conf = cluster.getRouterClientConf();
+    // Set dfs.client.failover.random.order to false to pick the 1st router first
+    conf.setBoolean("dfs.client.failover.random.order", false);
+
+    // The number of retries is 3 (see FailoverOnNetworkExceptionRetry#shouldRetry:
+    // it fails when retries > max.attempts), so there are 4 accesses in total.
+    conf.setInt("dfs.client.retry.max.attempts", 2);
+    DFSClient routerClient = new DFSClient(new URI("hdfs://fed"), conf);
+
+    // Get router0 metrics
+    FederationRPCMetrics rpcMetrics0 = cluster.getRouters().get(0)
+        .getRouter().getRpcServer().getRPCMetrics();
+    // Get router1 metrics
+    FederationRPCMetrics rpcMetrics1 = cluster.getRouters().get(1)
+        .getRouter().getRpcServer().getRPCMetrics();
+
+    // Original failures
+    long originalRouter0Failures = rpcMetrics0.getProxyOpNoNamenodes();
+    long originalRouter1Failures = rpcMetrics1.getProxyOpNoNamenodes();
+
+    // getFileInfo() will throw a RemoteException (NoNamenodesAvailableException)
+    String exceptionMessage = "org.apache.hadoop.hdfs.server.federation."
+        + "router.NoNamenodesAvailableException: No namenodes available "
+        + "under nameservice ns0";
+    try {
+      routerClient.getFileInfo("/");
+      fail("getFileInfo() should have failed with no namenodes available");
+    } catch (RemoteException e) {
+      assertExceptionContains(exceptionMessage, e);
+    }
+
+    // Router 0 failures will increase
+    assertEquals(originalRouter0Failures + 4,
+        rpcMetrics0.getProxyOpNoNamenodes());
+    // Router 1 failures do not change
+    assertEquals(originalRouter1Failures,
+        rpcMetrics1.getProxyOpNoNamenodes());
+
+    // Make name services available
+    transitionClusterNSToActive(cluster, 0);
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Manually trigger the heartbeat
+      Collection<NamenodeHeartbeatService> heartbeatServices = routerContext
+          .getRouter().getNamenodeHearbeatServices();
+      for (NamenodeHeartbeatService service : heartbeatServices) {
+        service.periodicInvoke();
+      }
+      // Update service cache
+      routerContext.getRouter().getStateStore().refreshCaches(true);
+    }
+
+    originalRouter0Failures = rpcMetrics0.getProxyOpNoNamenodes();
+
+    // RPC call must be successful
+    routerClient.getFileInfo("/");
+    // Router 0 failures do not change
+    assertEquals(originalRouter0Failures, rpcMetrics0.getProxyOpNoNamenodes());
+  }
 }
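
The heartbeat-and-refresh loop in testNoNamenodesAvailable() is a useful idiom
on its own; a sketch (hypothetical helper, relying only on APIs already used in
the test above) of how it could be factored out:

    // Hypothetical helper to make routers notice a namenode state change
    // without waiting for the periodic services to run.
    private static void refreshRouterCaches(StateStoreDFSCluster cluster)
        throws Exception {
      for (RouterContext routerContext : cluster.getRouters()) {
        // Trigger each namenode heartbeat service once, manually.
        for (NamenodeHeartbeatService service :
            routerContext.getRouter().getNamenodeHearbeatServices()) {
          service.periodicInvoke();
        }
        // Force the State Store caches to reload the new membership state.
        routerContext.getRouter().getStateStore().refreshCaches(true);
      }
    }
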
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
index f84e9a0..8772e2f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCClientRetries.java
@@ -133,7 +133,7 @@ public class TestRouterRPCClientRetries {
     } catch (RemoteException e) {
       String ns0 = cluster.getNameservices().get(0);
       assertExceptionContains(
-          "No namenode available under nameservice " + ns0, e);
+          "No namenodes available under nameservice " + ns0, e);
     }
 
     // Verify the retry times, it should only retry one time.




[hadoop] 24/41: HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.

Posted by in...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a73cfffa8eaf2e1a8418a1f2efed9b7d6ce5f59c
Author: Inigo Goiri <in...@apache.org>
AuthorDate: Fri Jan 11 10:11:18 2019 -0800

    HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.
---
 .../federation/router/RouterAdminServer.java       |  26 ++-
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  72 ++++++
 .../src/site/markdown/HDFSRouterFederation.md      |   6 +
 .../router/TestRouterAdminGenericRefresh.java      | 252 +++++++++++++++++++++
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 5 files changed, 357 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 18c19e0..027dd11 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -23,12 +23,14 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.util.Collection;
 import java.util.Set;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HDFSPolicyProvider;
+import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
@@ -64,9 +66,15 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableE
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.ipc.GenericRefreshProtocol;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.ipc.RefreshRegistry;
+import org.apache.hadoop.ipc.RefreshResponse;
+import org.apache.hadoop.ipc.proto.GenericRefreshProtocolProtos;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolServerSideTranslatorPB;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
@@ -81,7 +89,8 @@ import com.google.protobuf.BlockingService;
  * router. It is created, started, and stopped by {@link Router}.
  */
 public class RouterAdminServer extends AbstractService
-    implements MountTableManager, RouterStateManager, NameserviceManager {
+    implements MountTableManager, RouterStateManager, NameserviceManager,
+    GenericRefreshProtocol {
 
   private static final Logger LOG =
       LoggerFactory.getLogger(RouterAdminServer.class);
@@ -160,6 +169,15 @@ public class RouterAdminServer extends AbstractService
     router.setAdminServerAddress(this.adminAddress);
     iStateStoreCache =
         router.getSubclusterResolver() instanceof StateStoreCache;
+
+    GenericRefreshProtocolServerSideTranslatorPB genericRefreshXlator =
+        new GenericRefreshProtocolServerSideTranslatorPB(this);
+    BlockingService genericRefreshService =
+        GenericRefreshProtocolProtos.GenericRefreshProtocolService.
+        newReflectiveBlockingService(genericRefreshXlator);
+
+    DFSUtil.addPBProtocol(conf, GenericRefreshProtocolPB.class,
+        genericRefreshService, adminServer);
   }
 
   /**
@@ -487,4 +505,10 @@ public class RouterAdminServer extends AbstractService
   public static String getSuperGroup(){
     return superGroup;
   }
+
+  @Override // GenericRefreshProtocol
+  public Collection<RefreshResponse> refresh(String identifier, String[] args) {
+    // Let the registry handle it as needed
+    return RefreshRegistry.defaultRegistry().dispatch(identifier, args);
+  }
 }
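
Any router component can hook into this dispatch path by registering a handler
under a key of its choice. A minimal sketch (hypothetical key and handler; the
RefreshHandler and RefreshRegistry APIs are the ones exercised by the new test
below):

    // Hypothetical handler for an imaginary "routerWhiteList" key.
    RefreshHandler whiteListHandler = new RefreshHandler() {
      @Override
      public RefreshResponse handleRefresh(String identifier, String[] args) {
        // Reload whatever resource the key names; here we just acknowledge.
        return RefreshResponse.successResponse();
      }
    };
    RefreshRegistry.defaultRegistry().register("routerWhiteList",
        whiteListHandler);
    // "hdfs dfsrouteradmin -refreshRouterArgs <host:ipc_port> routerWhiteList"
    // is now dispatched to whiteListHandler by the refresh() override above.
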
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 27c42cd..37aad88 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.hdfs.tools.federation;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.util.Arrays;
+import java.util.Collection;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
@@ -26,8 +28,10 @@ import java.util.Map;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
@@ -61,9 +65,14 @@ import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableE
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.ipc.RefreshResponse;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
@@ -147,6 +156,8 @@ public class RouterAdmin extends Configured implements Tool {
       return "\t[-getDisabledNameservices]";
     } else if (cmd.equals("-refresh")) {
       return "\t[-refresh]";
+    } else if (cmd.equals("-refreshRouterArgs")) {
+      return "\t[-refreshRouterArgs <host:ipc_port> <key> [arg1..argn]]";
     }
     return getUsage(null);
   }
@@ -213,6 +224,10 @@ public class RouterAdmin extends Configured implements Tool {
       if (argv.length < 3) {
         return false;
       }
+    } else if ("-refreshRouterArgs".equals(cmd)) {
+      if (argv.length < 2) {
+        return false;
+      }
     }
     return true;
   }
@@ -310,6 +325,8 @@ public class RouterAdmin extends Configured implements Tool {
         getDisabledNameservices();
       } else if ("-refresh".equals(cmd)) {
         refresh(address);
+      } else if ("-refreshRouterArgs".equals(cmd)) {
+        exitCode = genericRefresh(argv, i);
       } else {
         throw new IllegalArgumentException("Unknown Command: " + cmd);
       }
@@ -923,6 +940,61 @@ public class RouterAdmin extends Configured implements Tool {
     }
   }
 
+  public int genericRefresh(String[] argv, int i) throws IOException {
+    String hostport = argv[i++];
+    String identifier = argv[i++];
+    String[] args = Arrays.copyOfRange(argv, i, argv.length);
+
+    // Get the current configuration
+    Configuration conf = getConf();
+
+    // For security authorization, the server principal for this call
+    // should be the NameNode's.
+    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
+        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
+
+    // Create the client
+    Class<?> xface = GenericRefreshProtocolPB.class;
+    InetSocketAddress address = NetUtils.createSocketAddr(hostport);
+    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+
+    RPC.setProtocolEngine(conf, xface, ProtobufRpcEngine.class);
+    GenericRefreshProtocolPB proxy = (GenericRefreshProtocolPB)RPC.getProxy(
+        xface, RPC.getProtocolVersion(xface), address, ugi, conf,
+        NetUtils.getDefaultSocketFactory(conf), 0);
+
+    Collection<RefreshResponse> responses = null;
+    try (GenericRefreshProtocolClientSideTranslatorPB xlator =
+        new GenericRefreshProtocolClientSideTranslatorPB(proxy)) {
+      // Refresh
+      responses = xlator.refresh(identifier, args);
+
+      int returnCode = 0;
+
+      // Print refresh responses
+      System.out.println("Refresh Responses:\n");
+      for (RefreshResponse response : responses) {
+        System.out.println(response.toString());
+
+        if (returnCode == 0 && response.getReturnCode() != 0) {
+          // This is the first non-zero return code, so we should return this
+          returnCode = response.getReturnCode();
+        } else if (returnCode != 0 && response.getReturnCode() != 0) {
+          // Now we have multiple non-zero return codes,
+          // so we merge them into -1
+          returnCode = -1;
+        }
+      }
+      return returnCode;
+    } finally {
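+      // Note: returning from this finally block discards any exception
+      // thrown by refresh(), reporting a generic failure instead.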
+      if (responses == null) {
+        System.out.println("Failed to get response.\n");
+        return -1;
+      }
+    }
+  }
+
   /**
    * Normalize a path for that filesystem.
    *
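
The merging rule above (the first non-zero code wins, and any additional
non-zero code collapses the result to -1) can be stated as a small pure
function; a sketch for illustration only, not part of the patch:

    // Illustrative only; mirrors the merging logic in genericRefresh().
    static int mergeReturnCodes(Iterable<RefreshResponse> responses) {
      int returnCode = 0;
      for (RefreshResponse response : responses) {
        if (response.getReturnCode() == 0) {
          continue; // zero codes never change the result
        }
        // Keep the first non-zero code; any further one collapses to -1.
        returnCode = (returnCode == 0) ? response.getReturnCode() : -1;
      }
      return returnCode;
    }
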
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 959cd63..bcf8fa9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -274,6 +274,12 @@ For example, one can disable `ns1`, list it and enable it again:
 
 This is useful when decommissioning subclusters or when one subcluster is misbehaving (e.g., low performance or unavailability).
 
+### Router server generic refresh
+
+This triggers a runtime refresh of the resource specified by \<key\> on \<host:ipc\_port\>. For example, to enable white list checking, we only need to send a refresh command rather than restarting the router server.
+
+    [hdfs]$ $HADOOP_HOME/bin/hdfs dfsrouteradmin -refreshRouterArgs <host:ipc_port> <key> [arg1..argn]
+
 Client configuration
 --------------------
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminGenericRefresh.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminGenericRefresh.java
new file mode 100644
index 0000000..fd68116
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminGenericRefresh.java
@@ -0,0 +1,252 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
+import org.apache.hadoop.ipc.RefreshHandler;
+import org.apache.hadoop.ipc.RefreshRegistry;
+import org.apache.hadoop.ipc.RefreshResponse;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.io.IOException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Before all tests, a router is spun up.
+ * Before each test, mock refresh handlers are created and registered.
+ * After each test, the mock handlers are unregistered.
+ * After all tests, the router is spun down.
+ */
+public class TestRouterAdminGenericRefresh {
+  private static Router router;
+  private static RouterAdmin admin;
+
+  private static RefreshHandler firstHandler;
+  private static RefreshHandler secondHandler;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+
+    // Build and start a router with admin + RPC
+    router = new Router();
+    Configuration config = new RouterConfigBuilder()
+        .admin()
+        .rpc()
+        .build();
+    router.init(config);
+    router.start();
+    admin = new RouterAdmin(config);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws IOException {
+    if (router != null) {
+      router.stop();
+      router.close();
+    }
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    // Register Handlers, first one just sends an ok response
+    firstHandler = Mockito.mock(RefreshHandler.class);
+    Mockito.when(firstHandler.handleRefresh(Mockito.anyString(),
+        Mockito.any(String[].class))).thenReturn(
+            RefreshResponse.successResponse());
+    RefreshRegistry.defaultRegistry().register("firstHandler", firstHandler);
+
+    // Second handler has conditional response for testing args
+    secondHandler = Mockito.mock(RefreshHandler.class);
+    Mockito.when(secondHandler.handleRefresh(
+        "secondHandler", new String[]{"one", "two"})).thenReturn(
+            new RefreshResponse(3, "three"));
+    Mockito.when(secondHandler.handleRefresh(
+        "secondHandler", new String[]{"one"})).thenReturn(
+            new RefreshResponse(2, "two"));
+    RefreshRegistry.defaultRegistry().register("secondHandler", secondHandler);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    RefreshRegistry.defaultRegistry().unregisterAll("firstHandler");
+    RefreshRegistry.defaultRegistry().unregisterAll("secondHandler");
+  }
+
+  @Test
+  public void testInvalidCommand() throws Exception {
+    String[] args = new String[]{"-refreshRouterArgs", "nn"};
+    int exitCode = admin.run(args);
+    assertEquals("RouterAdmin should fail due to bad args", -1, exitCode);
+  }
+
+  @Test
+  public void testInvalidIdentifier() throws Exception {
+    String[] argv = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "unregisteredIdentity"};
+    int exitCode = admin.run(argv);
+    assertEquals("RouterAdmin should fail due to no handler registered",
+        -1, exitCode);
+  }
+
+  @Test
+  public void testValidIdentifier() throws Exception {
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "firstHandler"};
+    int exitCode = admin.run(args);
+    assertEquals("RouterAdmin should succeed", 0, exitCode);
+
+    Mockito.verify(firstHandler).handleRefresh("firstHandler", new String[]{});
+    // Second handler was never called
+    Mockito.verify(secondHandler, Mockito.never())
+        .handleRefresh(Mockito.anyString(), Mockito.any(String[].class));
+  }
+
+  @Test
+  public void testVariableArgs() throws Exception {
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "secondHandler", "one"};
+    int exitCode = admin.run(args);
+    assertEquals("RouterAdmin should return 2", 2, exitCode);
+
+    exitCode = admin.run(new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(),
+        "secondHandler", "one", "two"});
+    assertEquals("RouterAdmin should now return 3", 3, exitCode);
+
+    Mockito.verify(secondHandler).handleRefresh(
+        "secondHandler", new String[]{"one"});
+    Mockito.verify(secondHandler).handleRefresh(
+        "secondHandler", new String[]{"one", "two"});
+  }
+
+  @Test
+  public void testUnregistration() throws Exception {
+    RefreshRegistry.defaultRegistry().unregisterAll("firstHandler");
+
+    // And now this should fail
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "firstHandler"};
+    int exitCode = admin.run(args);
+    assertEquals("RouterAdmin should return -1", -1, exitCode);
+  }
+
+  @Test
+  public void testUnregistrationReturnValue() {
+    RefreshHandler mockHandler = Mockito.mock(RefreshHandler.class);
+    RefreshRegistry.defaultRegistry().register("test", mockHandler);
+    boolean ret = RefreshRegistry.defaultRegistry().
+        unregister("test", mockHandler);
+    assertTrue(ret);
+  }
+
+  @Test
+  public void testMultipleRegistration() throws Exception {
+    RefreshRegistry.defaultRegistry().register("sharedId", firstHandler);
+    RefreshRegistry.defaultRegistry().register("sharedId", secondHandler);
+
+    // this should trigger both
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "sharedId", "one"};
+    int exitCode = admin.run(args);
+
+    // -1 because one of the responses is unregistered
+    assertEquals(-1, exitCode);
+
+    // verify we called both
+    Mockito.verify(firstHandler).handleRefresh(
+        "sharedId", new String[]{"one"});
+    Mockito.verify(secondHandler).handleRefresh(
+        "sharedId", new String[]{"one"});
+
+    RefreshRegistry.defaultRegistry().unregisterAll("sharedId");
+  }
+
+  @Test
+  public void testMultipleReturnCodeMerging() throws Exception {
+    // Two handlers which return two non-zero values
+    RefreshHandler handlerOne = Mockito.mock(RefreshHandler.class);
+    Mockito.when(handlerOne.handleRefresh(Mockito.anyString(),
+        Mockito.any(String[].class))).thenReturn(
+            new RefreshResponse(23, "Twenty Three"));
+
+    RefreshHandler handlerTwo = Mockito.mock(RefreshHandler.class);
+    Mockito.when(handlerTwo.handleRefresh(Mockito.anyString(),
+        Mockito.any(String[].class))).thenReturn(
+            new RefreshResponse(10, "Ten"));
+
+    // Then registered to the same ID
+    RefreshRegistry.defaultRegistry().register("shared", handlerOne);
+    RefreshRegistry.defaultRegistry().register("shared", handlerTwo);
+
+    // We refresh both
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "shared"};
+    int exitCode = admin.run(args);
+
+    // We get -1 because of our logic for merging non-zero return codes
+    assertEquals(-1, exitCode);
+
+    // Verify we called both
+    Mockito.verify(handlerOne).handleRefresh("shared", new String[]{});
+    Mockito.verify(handlerTwo).handleRefresh("shared", new String[]{});
+
+    RefreshRegistry.defaultRegistry().unregisterAll("shared");
+  }
+
+  @Test
+  public void testExceptionResultsInNormalError() throws Exception {
+    // In this test, we ensure that all handlers are called
+    // even if we throw an exception in one
+    RefreshHandler exceptionalHandler = Mockito.mock(RefreshHandler.class);
+    Mockito.when(exceptionalHandler.handleRefresh(Mockito.anyString(),
+        Mockito.any(String[].class))).thenThrow(
+            new RuntimeException("Exceptional Handler Throws Exception"));
+
+    RefreshHandler otherExceptionalHandler = Mockito.mock(RefreshHandler.class);
+    Mockito.when(otherExceptionalHandler.handleRefresh(Mockito.anyString(),
+        Mockito.any(String[].class))).thenThrow(
+            new RuntimeException("More Exceptions"));
+
+    RefreshRegistry.defaultRegistry().register("exceptional",
+        exceptionalHandler);
+    RefreshRegistry.defaultRegistry().register("exceptional",
+        otherExceptionalHandler);
+
+    String[] args = new String[]{"-refreshRouterArgs", "localhost:" +
+        router.getAdminServerAddress().getPort(), "exceptional"};
+    int exitCode = admin.run(args);
+    assertEquals(-1, exitCode); // Exceptions result in a -1
+
+    Mockito.verify(exceptionalHandler).handleRefresh(
+        "exceptional", new String[]{});
+    Mockito.verify(otherExceptionalHandler).handleRefresh(
+        "exceptional", new String[]{});
+
+    RefreshRegistry.defaultRegistry().unregisterAll("exceptional");
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 5bfb0cb..c3f113d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -438,6 +438,7 @@ Usage:
           [-nameservice disable | enable <nameservice>]
           [-getDisabledNameservices]
           [-refresh]
+          [-refreshRouterArgs <host:ipc_port> <key> [arg1..argn]]
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
@@ -451,6 +452,7 @@ Usage:
 | `-nameservice` `disable` `enable` *nameservice* | Disable/enable  a name service from the federation. If disabled, requests will not go to that name service. |
 | `-getDisabledNameservices` | Get the name services that are disabled in the federation. |
 | `-refresh` | Update mount table cache of the connected router. |
+| `-refreshRouterArgs` \<host:ipc\_port\> \<key\> [arg1..argn] | Trigger a runtime refresh of the resource specified by \<key\> on \<host:ipc\_port\>. For example, to enable white list checking, we only need to send a refresh command rather than restarting the router server. |
 
 The commands for managing Router-based federation. See [Mount table management](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Mount_table_management) for more info.
 

