Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/09/20 22:41:58 UTC

[GitHub] [hudi] alexeykudinkin commented on a diff in pull request #5113: [HUDI-3625] [RFC-60] Optimized storage layout for Cloud Object Stores

alexeykudinkin commented on code in PR #5113:
URL: https://github.com/apache/hudi/pull/5113#discussion_r975852260


##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for

Review Comment:
   nit: "S3 creating"



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user will
+configure another `Table Storage Path` under which the actual data files will be distributed. The original `Table Path` will
+be used to maintain the table's and partitions' Hudi metadata.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket instead of a prefix under the bucket as it allows for S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+##### Option 1:
+We can re-use the implementations in the `HashID` class to generate a hash on `File Name` or `Partition + File Name`, which
+uses the XX hash function with 32/64 bits (known for being fast).
+
+##### Option 2:

Review Comment:
   I think we can collapse both of these sections into one, calling out that we plan to support different hashing engines
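
   For illustration, a pluggable hashing engine could look like the minimal sketch below. The names (`HashingEngine`, `hashPrefix`) are hypothetical, and a self-contained FNV-1a stand-in is used in place of the XXHash/Murmur engines discussed here:

   ```java
   import java.nio.charset.StandardCharsets;

   // Hypothetical sketch of a pluggable hashing engine for prefix generation.
   public enum HashingEngine {
     FNV1A_32 {
       @Override
       public int hash(String input) {
         // FNV-1a, used here only as a self-contained stand-in for XXHash/Murmur3
         int h = 0x811c9dc5;
         for (byte b : input.getBytes(StandardCharsets.UTF_8)) {
           h ^= (b & 0xff);
           h *= 0x01000193;
         }
         return h;
       }
     };

     public abstract int hash(String input);

     // Maps a file name onto a fixed number of buckets, rendered as a hex prefix.
     public String hashPrefix(String fileName, int numBuckets) {
       return String.format("%08x", Math.floorMod(hash(fileName), numBuckets));
     }
   }
   ```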



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);

Review Comment:
   I have a concern similar to @prasannarajaperumal: if we're talking about a URI here, we need to clearly define the encoding for such a URI. Having something like Hadoop's `Path` or Java's `URI` would have been clearer in terms of expectations.
   
   WDYT?
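
   For illustration, a `Path`-based variant might look like this minimal sketch; the class names and the bucketing constant are hypothetical, and the prefix computation is a stand-in for the hashing discussed later in the RFC:

   ```java
   import java.io.Serializable;
   import org.apache.hadoop.fs.Path;

   // Hypothetical Path-based variant of the proposed interface, making the
   // encoding expectations explicit via Hadoop's Path type.
   interface PathBasedStorageStrategy extends Serializable {
     Path storageLocation(String fileName);
     Path storageLocation(String partitionPath, String fileName);
   }

   // Minimal illustrative implementation distributing files under hash prefixes.
   class HashPrefixStorageStrategy implements PathBasedStorageStrategy {
     private final String tableStoragePath; // e.g. "s3://<table_storage_bucket>"
     private final String tableName;

     HashPrefixStorageStrategy(String tableStoragePath, String tableName) {
       this.tableStoragePath = tableStoragePath;
       this.tableName = tableName;
     }

     @Override
     public Path storageLocation(String fileName) {
       return new Path(String.join("/", tableStoragePath, hashPrefix(fileName), tableName, fileName));
     }

     @Override
     public Path storageLocation(String partitionPath, String fileName) {
       return new Path(String.join("/", tableStoragePath, hashPrefix(fileName), tableName, partitionPath, fileName));
     }

     private String hashPrefix(String fileName) {
       // Stand-in bucketing; a real implementation would use XXHash/Murmur.
       return String.format("%08x", Math.floorMod(fileName.hashCode(), 1024));
     }
   }
   ```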



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.

Review Comment:
   Chatted about this w/ @umehrot2 offline: 
   
    - Logical partitioning only addresses this issue w/in the scope of a single table, while resorting to a StorageStrategy would allow us to also address the throttling issue that might occur at the x-table level (when multiple tables are nested under one "datalake" folder)



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user will
+configure another `Table Storage Path` under which the actual data files will be distributed. The original `Table Path` will
+be used to maintain the table's and partitions' Hudi metadata.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket instead of a prefix under the bucket as it allows for S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+##### Option 1:
+We can re-use the implementations in the `HashID` class to generate a hash on `File Name` or `Partition + File Name`, which
+uses the XX hash function with 32/64 bits (known for being fast).
+
+##### Option 2:
+To generate the prefixes we can use the `Murmur 32-bit` hash, which is known for being fast and provides good distribution
+guarantees. We might have to apply further bucketing and re-hashing to reduce the number of possible hashes from 2^32 to a
+much smaller number, as it may be overkill to have that many unique hashes, which might result in scenarios

Review Comment:
   I think 1 prefix per file would be impractical for 99% of the cases, and we should call out that we will instead support a fixed number of buckets configured by the user
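
   As a concrete illustration of the fixed-bucket idea (the `numBuckets` knob is hypothetical), folding the 32-bit hash space down to a user-configured bucket count keeps the number of distinct prefixes practical:

   ```java
   // Sketch: fold a raw 32-bit hash into one of numBuckets prefixes.
   static String bucketPrefix(int rawHash32, int numBuckets) {
     int bucket = Math.floorMod(rawHash32, numBuckets);   // 0 .. numBuckets-1
     return String.format("%08x", bucket);                // e.g. "0000001f" for bucket 31
   }
   ```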



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user will
+configure another `Table Storage Path` under which the actual data files will be distributed. The original `Table Path` will
+be used to maintain the table's and partitions' Hudi metadata.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket instead of a prefix under the bucket as it allows for S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...

Review Comment:
   This also seems a bit confusing and may give the false impression that the table is empty. Shall we use some faux hash prefix like `0000000` to make paths coherent b/w the base files and metadata?
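
   Under one reading of that suggestion (the bucket and prefix values are purely illustrative), the metadata paths would become:

   ```
   s3://<table_bucket>/00000000/<hudi_table_name>/.hoodie/...
   s3://<table_bucket>/00000000/<hudi_table_name>/country=usa/.hoodie_partition_metadata
   ```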



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user will
+configure another `Table Storage Path` under which the actual data files will be distributed. The original `Table Path` will
+be used to maintain the table's and partitions' Hudi metadata.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket instead of a prefix under the bucket as it allows for S3 to partition sooner.

Review Comment:
   I think we should rephrase this to say that storing the table w/in a folder makes tables share the prefix, while storing the table at the top level avoids that
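
   To make the trade-off concrete (paths illustrative): with `Table Storage Path` as a prefix inside a shared bucket, every table's hash prefixes sit under the same parent prefix, whereas a dedicated bucket puts them at the top level:

   ```
   As a prefix:  s3://<shared_bucket>/tables/<hash_prefix>/...   <- all tables share "tables/"
   As a bucket:  s3://<table_storage_bucket>/<hash_prefix>/...   <- hash prefixes at top level
   ```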



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is the potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy for
+distributing the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, since retrying failed
+requests reduces throughput, and can result in occasional failures if the retries themselves continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc. based on their specific needs. For example, customers have use cases that require distributing the files for
+each partition under a separate S3 bucket with its own encryption key. Such use-cases are currently not possible to
+implement with Hudi.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, resulting in Amazon S3 to create partitions for
+the prefixes each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if neither the traditional Hive storage layout nor the federated storage layer (proposed in this RFC)
+meets their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user will
+configure another `Table Storage Path` under which the actual data files will be distributed. The original `Table Path` will
+be used to maintain the table's and partitions' Hudi metadata.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket instead of a prefix under the bucket as it allows for S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+##### Option 1:
+We can re-use the implementation in the `HashID` class to generate a hash of the `File Name` or `Partition + File Name`,
+which uses the XXHash function with 32/64 bits (known for being fast).
+
+##### Option 2:
+To generate the prefixes, we can use the `Murmur 32 bit` hash, which is known for being fast and provides good
+distribution guarantees. We may have to further bucket and re-hash the values to reduce the number of possible hashes
+from 2^32 to a lower number, as having that many unique hashes may be overkill and could result in scenarios where
+each file ends up under a different prefix.
+
+The hashing function should be made user-configurable.
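+
+As a concrete illustration of the bucketing idea in Option 2, here is a minimal, hypothetical sketch (not part of this
+proposal); Guava's Murmur3 and the bucket count are assumed stand-ins for whichever hash function is ultimately chosen:
+```java
+import java.nio.charset.StandardCharsets;
+
+import com.google.common.hash.Hashing;
+
+public class HashPrefixGenerator {
+  private final int numBuckets; // assumed tunable value, e.g. 1024
+
+  public HashPrefixGenerator(int numBuckets) {
+    this.numBuckets = numBuckets;
+  }
+
+  /** Maps a file name to one of numBuckets hash prefixes, e.g. "000003f2". */
+  public String prefixFor(String fileName) {
+    int hash = Hashing.murmur3_32_fixed().hashString(fileName, StandardCharsets.UTF_8).asInt();
+    // Re-bucket the 2^32 hash space down to a bounded number of distinct prefixes.
+    return String.format("%08x", Math.floorMod(hash, numBuckets));
+  }
+}
+```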
+
+### Maintain mapping to files
+
+In [RFC-15](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427331), we introduced an internal
+Metadata Table with a `files` partition that maintains a mapping from each partition to the list of files stored in it
+under `Table Path`. This mapping is kept up to date as operations are performed on the original table. We will leverage
+the same mechanism to maintain mappings to files stored at `Table Storage Path` under different prefixes.
+
+Here are some of the design considerations:
+
+1. The metadata table is a prerequisite for federated storage to work. Since Hudi 0.11 the metadata table has been
+enabled by default, so users can enable this feature as long as they do not explicitly turn the metadata table off, in
+which case we should throw an exception.
+
+2. Federated storage cannot be enabled on an existing table that is already bootstrapped with the Hive storage
+layout. To switch to federated storage, the table will need to be re-bootstrapped with the new layout.
+
+3. The Instant metadata (`HoodieCommitMetadata`, `HoodieCleanMetadata`, etc.) will always act as the source of file
+listings used to populate the metadata table.
+
+4. `HoodieCommitMetadata` currently stores the `file name` instead of the complete `file path`. We will have to modify
+the commit metadata to store the complete file path instead of just the file name, as the files are now distributed
+across several random prefix paths instead of a derivable table/partition path (see the illustration after this list).
+
+5. If there is an error reading from the Metadata table, we will not fall back to listing from the file system.
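+
+To illustrate point 4 (reusing a file name from the layout example above, purely for illustration), the commit metadata
+would record the second form below instead of the first:
+```
+075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet                               <-- file name only (today)
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet   <-- complete path (proposed)
+```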

Review Comment:
   How would this be possible w/o the File System supporting Federated Storage (providing Hudi's components w/ a virtual view of the table; see my comment above for more context)?



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads over cloud object stores like Amazon S3, there is a potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduces throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy to
+distribute the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousands of files are being written/updated concurrently. This hurts performance, as re-trying
+failed requests reduces throughput, and it can result in occasional failures if the retries continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want flexibility to be able to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc., based on their specific needs. For example, customers have use cases to distribute files for each partition under
+a separate S3 bucket with its individual encryption key. It is not possible to implement such use-cases with Hudi currently.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, causing Amazon S3 to create partitions for
+the prefixes, each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if using the traditional Hive storage layout or federated storage layer (proposed in this RFC) does
+not meet their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
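+
+For illustration only, a minimal hash-based implementation of this interface could look like the sketch below; the
+class name, constructor parameters and the JDK `hashCode`-based stand-in hash are assumptions, not part of the RFC:
+```java
+public class HashBasedStorageStrategy implements FederatedStorageStrategy {
+  private final String tableStoragePath; // e.g. "s3://<table_storage_bucket>"
+  private final String tableName;
+
+  public HashBasedStorageStrategy(String tableStoragePath, String tableName) {
+    this.tableStoragePath = tableStoragePath;
+    this.tableName = tableName;
+  }
+
+  @Override
+  public String storageLocation(String fileName) {
+    // Non-partitioned layout: <storage path>/<hash prefix>/<table name>/<file name>
+    return String.format("%s/%s/%s/%s", tableStoragePath, hashPrefix(fileName), tableName, fileName);
+  }
+
+  @Override
+  public String storageLocation(String partitionPath, String fileName) {
+    // Partitioned layout: <storage path>/<hash prefix>/<table name>/<partition>/<file name>
+    return String.format("%s/%s/%s/%s/%s", tableStoragePath, hashPrefix(fileName), tableName, partitionPath, fileName);
+  }
+
+  private String hashPrefix(String fileName) {
+    // Stand-in hash for the sketch; the RFC discusses XXHash and Murmur as the actual candidates.
+    return String.format("%08x", Math.floorMod(fileName.hashCode(), 1 << 16));
+  }
+}
+```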
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user
+will configure another `Table Storage Path` under which the actual data files will be distributed. The original
+`Table Path` will be used to maintain the Hudi metadata for the table and its partitions.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket rather than a prefix within a bucket, as this allows S3 to create partitions sooner.
+
+We will apply a hash function to the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+##### Option 1:
+We can re-use the implementation in the `HashID` class to generate a hash of the `File Name` or `Partition + File Name`, which

Review Comment:
   Typo



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads over cloud object stores like Amazon S3, there is a potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduces throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy to
+distribute the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousands of files are being written/updated concurrently. This hurts performance, as re-trying
+failed requests reduces throughput, and it can result in occasional failures if the retries continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want flexibility to be able to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc., based on their specific needs. For example, customers have use cases to distribute files for each partition under
+a separate S3 bucket with its individual encryption key. It is not possible to implement such use-cases with Hudi currently.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, causing Amazon S3 to create partitions for
+the prefixes, each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if using the traditional Hive storage layout or federated storage layer (proposed in this RFC) does
+not meet their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user
+will configure another `Table Storage Path` under which the actual data files will be distributed. The original
+`Table Path` will be used to maintain the Hudi metadata for the table and its partitions.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket rather than a prefix within a bucket, as this allows S3 to create partitions sooner.
+
+We will apply a hash function to the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under
+the same prefix as the base parquet file of the FileSlice to which they belong.
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+##### Option 1:
+We can re-use the implementation in the `HashID` class to generate a hash of the `File Name` or `Partition + File Name`,
+which uses the XXHash function with 32/64 bits (known for being fast).
+
+##### Option 2:
+To generate the prefixes, we can use the `Murmur 32 bit` hash, which is known for being fast and provides good
+distribution guarantees. We may have to further bucket and re-hash the values to reduce the number of possible hashes
+from 2^32 to a lower number, as having that many unique hashes may be overkill and could result in scenarios where
+each file ends up under a different prefix.
+
+The hashing function should be made user-configurable.
+
+### Maintain mapping to files
+
+In [RFC-15](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427331), we introduced an internal
+Metadata Table with a `files` partition that maintains a mapping from each partition to the list of files stored in it
+under `Table Path`. This mapping is kept up to date as operations are performed on the original table. We will leverage
+the same mechanism to maintain mappings to files stored at `Table Storage Path` under different prefixes.
+
+Here are some of the design considerations:
+
+1. The metadata table is a prerequisite for federated storage to work. Since Hudi 0.11 the metadata table has been
+enabled by default, so users can enable this feature as long as they do not explicitly turn the metadata table off, in
+which case we should throw an exception.
+
+2. Federated storage cannot be enabled on an existing table that is already bootstrapped with the Hive storage
+layout. To switch to federated storage, the table will need to be re-bootstrapped with the new layout.
+
+3. The Instant metadata (`HoodieCommitMetadata`, `HoodieCleanMetadata`, etc.) will always act as the source of file
+listings used to populate the metadata table.
+
+4. `HoodieCommitMetadata` currently stores the `file name` instead of the complete `file path`. We will have to modify
+the commit metadata to store the complete file path instead of just the file name, as the files are now distributed
+across several random prefix paths instead of a derivable table/partition path.

Review Comment:
   +1 
   
   Storing absolute paths is a can of worms we want to avoid opening at all costs.
   
   Before diving into why it should be avoided, I want to step back and first get alignment on the following critical premise -- any federation should be _completely transparent_ to all of the Hudi components, ie all of the components (besides the File Access layer) should not even be aware that there's any storage federation happening: we should essentially create a _virtual FS view_ of the table where all files are still co-located w/in the same partition, and a _real FS view_ where they could be stored following an arbitrary strategy. That way we can guarantee that no high-level component relying on the provided FS views will be impacted, and all will still function as expected.
   
   Now with that in mind, I envision the following issues if we depart from this tenet:
   
    - The biggest issue with absolute paths is that storing them exposes other Hudi components to the table layout and requires them to be aware of how the table is stored, which will quickly go out of hand as the number of strategies grows. We should make sure that higher-level components are not exposed to the storage layout and have their own view of the table in which files w/in a partition are still co-located (see paragraph above for more details)
   
    - Storing absolute paths will also make table relocations impossible (it's not out of the realm of possibility that you might want to relocate your table from one bucket to another, to switch regions for ex)
   
    - It will also entail an increased storage footprint (in the MT for ex)
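
   To make the premise concrete, here is a rough hypothetical sketch (not actual Hudi APIs): metadata keeps storing relative `<partition>/<file name>` entries, and only the file-access layer translates them to real locations via the strategy:
   ```java
   public class VirtualFileAccessLayer {
     private final FederatedStorageStrategy strategy;

     public VirtualFileAccessLayer(FederatedStorageStrategy strategy) {
       this.strategy = strategy;
     }

     // Higher-level components pass around only the virtual, co-located path;
     // resolution to the real, hashed location happens in this one place.
     public String resolve(String partitionPath, String fileName) {
       return strategy.storageLocation(partitionPath, fileName);
     }
   }
   ```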



##########
rfc/rfc-56/rfc-56.md:
##########
@@ -0,0 +1,226 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-56: Federated Storage Layer
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads over cloud object stores like Amazon S3, there is a potential of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we are proposing to support an alternate storage
+layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduces throttling.
+
+In addition, we are proposing an interface that would allow users to implement their own custom strategy to
+distribute the data files across cloud stores, HDFS, or on-premises storage based on their specific use-cases.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousands of files are being written/updated concurrently. This hurts performance, as re-trying
+failed requests reduces throughput, and it can result in occasional failures if the retries continue to be throttled.
+
+The traditional storage layout also tightly couples the partitions as folders under the table path. However,
+some users want flexibility to be able to distribute files/partitions under multiple different paths across cloud stores,
+HDFS, etc., based on their specific needs. For example, customers have use cases to distribute files for each partition under
+a separate S3 bucket with its individual encryption key. It is not possible to implement such use-cases with Hudi currently.
+
+The high level proposal here is to introduce a new storage layout strategy, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix.
+This would help distribute the requests evenly across different prefixes, causing Amazon S3 to create partitions for
+the prefixes, each with its own request limit. This significantly reduces the possibility of hitting the request limit
+for a specific prefix/partition.
+
+In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for
+distributing files if using the traditional Hive storage layout or federated storage layer (proposed in this RFC) does
+not meet their use-case.
+
+## Design
+
+### Interface
+
+```java
+/**
+ * Interface for providing storage file locations.
+ */
+public interface FederatedStorageStrategy extends Serializable {
+  /**
+   * Return a fully-qualified storage file location for the given filename.
+   *
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String fileName);
+
+  /**
+   * Return a fully-qualified storage file location for the given partition and filename.
+   *
+   * @param partitionPath partition path for the file
+   * @param fileName data file name
+   * @return a fully-qualified location URI for a data file
+   */
+  String storageLocation(String partitionPath, String fileName);
+}
+```
+
+### Generating file paths for Cloud storage optimized layout
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user
+will configure another `Table Storage Path` under which the actual data files will be distributed. The original
+`Table Path` will be used to maintain the Hudi metadata for the table and its partitions.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket rather than a prefix within a bucket, as this allows S3 to create partitions sooner.
+
+We will apply a hash function to the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This
+folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list for
+the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
+...
+```
+**Note**: For `Merge on Read` tables, the log files will also go through the same hashing process and may not end up under

Review Comment:
   This is very counter-intuitive, and it would be really hard to reconstruct the table's state from that. We should make sure we're not detaching the log files from their base files



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org