Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/05/31 21:26:34 UTC

[GitHub] [hudi] umehrot2 commented on a diff in pull request #5113: [HUDI-3625] [RFC-48] Optimized storage layout for Cloud Object Stores

umehrot2 commented on code in PR #5113:
URL: https://github.com/apache/hudi/pull/5113#discussion_r860314414


##########
rfc/rfc-48/rfc-48.md:
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, you risk hitting request throttling
+limits, which in turn impacts performance. In this RFC, we propose supporting an alternate storage layout that is
+optimized for Amazon S3 and other cloud object stores, helping achieve maximum throughput and significantly reduce
+throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance due to

Review Comment:
   Yes, will mention that as well.



##########
rfc/rfc-48/rfc-48.md:
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, you risk hitting request throttling
+limits, which in turn impacts performance. In this RFC, we propose supporting an alternate storage layout that is
+optimized for Amazon S3 and other cloud object stores, helping achieve maximum throughput and significantly reduce
+throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, as retrying failed
+requests reduces throughput, and it can result in occasional failures when the retries themselves continue to be
+throttled.
+
+The high-level proposal here is to introduce a new storage layout, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table
+path/prefix. This would help distribute the requests evenly across different prefixes, prompting Amazon S3 to create
+partitions for the prefixes, each with its own request limit. This significantly reduces the possibility of hitting
+the request limit for a specific prefix/partition.
+
+## Design
+
+### Generating file paths
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user
+will configure another `Table Storage Path` under which the actual data files will be distributed. The original
+`Table Path` will be used to maintain the Hudi metadata for the table and its partitions.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket rather than a prefix under the bucket, as this allows S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition.
+This folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list
+for the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet

Review Comment:
   I think a log file should also be treated as just another file, and can end up under any prefix. Either way, files from different file slices within a file group end up under different prefixes, so I see no strong reason to keep files of the same file slice under the same prefix. I will add this specifically to the RFC for the MOR scenario.
   
   However, we should design this in such a way that if someone really wants all files of a file slice under the same prefix, it can be implemented via another implementation of the strategy interface.
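
A minimal sketch of such a pluggable strategy, assuming hypothetical names (these classes and methods are illustrative, not Hudi's actual API, and MD5 stands in for whichever hash is ultimately chosen):

```python
import hashlib
from abc import ABC, abstractmethod

class PrefixStrategy(ABC):
    """Hypothetical interface: decides which storage prefix a file lands under."""
    @abstractmethod
    def prefix_for(self, file_name: str) -> str:
        ...

class PerFilePrefixStrategy(PrefixStrategy):
    """Behavior discussed above: every file (base or log) hashes independently."""
    def prefix_for(self, file_name: str) -> str:
        return hashlib.md5(file_name.encode("utf-8")).hexdigest()[:8]

class PerFileSlicePrefixStrategy(PrefixStrategy):
    """Alternative: hash only the file ID portion, so all files of a file
    slice (base file plus its log files) land under the same prefix."""
    def prefix_for(self, file_name: str) -> str:
        file_id = file_name.split("_")[0]  # token before the first underscore
        return hashlib.md5(file_id.encode("utf-8")).hexdigest()[:8]
```

Swapping one implementation for the other changes only how paths are derived, not how readers resolve them, which is what makes the per-slice variant feasible as a drop-in.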



##########
rfc/rfc-48/rfc-48.md:
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, you risk hitting request throttling
+limits, which in turn impacts performance. In this RFC, we propose supporting an alternate storage layout that is
+optimized for Amazon S3 and other cloud object stores, helping achieve maximum throughput and significantly reduce
+throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the

Review Comment:
   Well, for the example you provided with 100 MB files, I believe approximately 600 GB/sec would need to be read to generate 6000 GET requests/sec. However, besides just reading the files, Hudi issues a bunch of other GET requests to read the metadata, timeline, marker files, metadata list index, and now other indexes.
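
The back-of-envelope arithmetic above can be checked directly, under the idealized assumption that each GET reads one whole 100 MB file and nothing else:

```python
# Sanity check of the figures in the comment above: sustaining 6000 GETs/sec
# against 100 MB objects implies roughly 600 GB/sec of read throughput.
file_size_mb = 100
gets_per_sec = 6000
throughput_gb_per_sec = file_size_mb * gets_per_sec / 1000  # MB -> GB
print(throughput_gb_per_sec)  # 600.0
```

In practice the per-prefix GET limit is reached much sooner, since the extra metadata/timeline/marker requests mentioned above count against the same limit without moving data.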



##########
rfc/rfc-48/rfc-48.md:
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, you risk hitting request throttling
+limits, which in turn impacts performance. In this RFC, we propose supporting an alternate storage layout that is
+optimized for Amazon S3 and other cloud object stores, helping achieve maximum throughput and significantly reduce
+throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle

Review Comment:
   Looked into Google Cloud Storage, and they have throttling limits too and even recommend similar randomized-prefix solutions: https://cloud.google.com/storage/docs/request-rate



##########
rfc/rfc-48/rfc-48.md:
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, you risk hitting request throttling
+limits, which in turn impacts performance. In this RFC, we propose supporting an alternate storage layout that is
+optimized for Amazon S3 and other cloud object stores, helping achieve maximum throughput and significantly reduce
+throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout while writing files on storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the
+same table path prefix could result in these request limits being hit for the table prefix, especially as workloads
+scale and several thousand files are being written/updated concurrently. This hurts performance, as retrying failed
+requests reduces throughput, and it can result in occasional failures when the retries themselves continue to be
+throttled.
+
+The high-level proposal here is to introduce a new storage layout, where all files are distributed evenly across
+multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table
+path/prefix. This would help distribute the requests evenly across different prefixes, prompting Amazon S3 to create
+partitions for the prefixes, each with its own request limit. This significantly reduces the possibility of hitting
+the request limit for a specific prefix/partition.
+
+## Design
+
+### Generating file paths
+
+We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage
+layout of keeping them under a common table path/prefix. In addition to the `Table Path`, for this new layout the user
+will configure another `Table Storage Path` under which the actual data files will be distributed. The original
+`Table Path` will be used to maintain the Hudi metadata for the table and its partitions.
+
+For the purpose of this documentation, let's assume:
+```
+Table Path => s3://<table_bucket>/<hudi_table_name>/
+
+Table Storage Path => s3://<table_storage_bucket>/
+```
+Note: `Table Storage Path` can be a path in the same Amazon S3 bucket or a different bucket. For best results,
+`Table Storage Path` should be a bucket rather than a prefix under the bucket, as this allows S3 to partition sooner.
+
+We will use a hashing function on the `File Name` to map each file to a prefix generated under `Table Storage Path`:
+```
+s3://<table_storage_bucket>/<hash_prefix>/..
+```
+
+In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition.
+This folder structure would be useful if we ever have to do a file system listing to re-create the metadata file list
+for the table (discussed more in the next section). Here is how the final layout would look for `partitioned` tables:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+...
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+...
+```
+
+For `non-partitioned` tables, this is how it would look:
+```
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
+s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
+s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
+...
+```
+
+The original table path will continue to store the `metadata folder` and `partition metadata` files:
+```
+s3://<table_bucket>/<hudi_table_name>/.hoodie/...
+s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
+s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
+...
+```
+
+#### Hashing
+
+To generate the prefixes, we can use a `Murmur 32-bit` hash on the `File Names`, which is known for being fast and provides

Review Comment:
   Interesting. Yeah, I see it's using something called xxHash (for 32-bit hashes), which is supposed to be quite fast. Will try that one first.
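
As an illustration of the prefix derivation being discussed, here is a sketch in Python. Neither Murmur nor xxHash ships in the Python standard library, so `zlib.crc32` stands in as the fast 32-bit hash; the bucket and table placeholders are taken from the RFC's examples:

```python
import zlib

def hash_prefix(file_name: str, width: int = 8) -> str:
    # zlib.crc32 stands in for Murmur/xxHash here; any fast 32-bit hash
    # with uniform dispersion serves the same purpose.
    h = zlib.crc32(file_name.encode("utf-8"))  # unsigned 32-bit value
    return format(h, "08x")[:width]            # fixed-width hex prefix

# The mapping is deterministic, so a path can always be re-derived from the
# file name alone, without consulting the metadata table.
name = "075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet"
path = f"s3://<table_storage_bucket>/{hash_prefix(name)}/<hudi_table_name>/country=usa/{name}"
```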



##########
rfc/README.md:
##########
@@ -71,3 +71,4 @@ The list of all RFCs can be found here.
 | 45 | [Asynchronous Metadata Indexing](./rfc-45/rfc-45.md) | `UNDER REVIEW` |
 | 46 | [Optimizing Record Payload Handling](./rfc-46/rfc-46.md) | `UNDER REVIEW` |
 | 47 | [Add Call Produce Command for Spark SQL](./rfc-47/rfc-47.md) | `UNDER REVIEW` |
+| 48 | [Optimized storage layout for Cloud object stores](./rfc-48/rfc-48.md) | `UNDER REVIEW` |

Review Comment:
   Hmm, I somehow feel that this is more of a layout strategy, in terms of how the file layout is organized on the underlying storage. But I see that the term `layout` has already been reserved as `HoodieStorageLayout` to differentiate between the default and bucketized layouts, as well as in clustering.
   
   So, I guess we can change it to `Federated storage layer` instead.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org