Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/03/24 20:57:53 UTC

[GitHub] [hudi] umehrot2 commented on a change in pull request #5113: [HUDI-3625] [RFC-48] Optimized storage layout for Cloud Object Stores

umehrot2 commented on a change in pull request #5113:
URL: https://github.com/apache/hudi/pull/5113#discussion_r834721194



##########
File path: rfc/rfc-48/rfc-48.md
##########
@@ -0,0 +1,171 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+# RFC-[48]: Optimized storage layout for Cloud Object Stores
+
+## Proposers
+- @umehrot2
+
+## Approvers
+- @vinoth
+- @shivnarayan
+
+## Status
+
+JIRA: [https://issues.apache.org/jira/browse/HUDI-3625](https://issues.apache.org/jira/browse/HUDI-3625)
+
+## Abstract
+
+As you scale your Apache Hudi workloads on cloud object stores like Amazon S3, there is a risk of hitting request
+throttling limits, which in turn impacts performance. In this RFC, we propose to support an alternate storage layout
+that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and
+significantly reduce throttling.
+
+## Background
+
+Apache Hudi follows the traditional Hive storage layout when writing files to storage:
+- Partitioned Tables: The files are distributed across multiple physical partition folders, under the table's base path.
+- Non-Partitioned Tables: The files are stored directly under the table's base path.
+
+While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when
+working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores [throttle
+requests based on object prefix](https://aws.amazon.com/premiumsupport/knowledge-center/s3-request-limit-avoid-throttling/).
+Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (with their own request limits),
+but there can be a 30 - 60 minute wait time before new partitions are created. Thus, all files/objects stored under the

Review comment:
       The `30-60 minute` window that I am talking about is how long S3 takes to create internal partitions. This is not the same as the table partitions or folders that you are referring to. Check out https://youtu.be/rHeTn9pHNKo?t=3290, which explains this a little bit.
   
   And yes, you are right that initially S3 will treat the common table prefix as having its fixed request limits. Then, as it sees more traffic across different prefixes under the common table prefix (because of requests to different partitions), it may do internal partitioning (taking 30 - 60 minutes) to scale the request limits for each of those prefixes. But this scaling is not instantaneous. The video explains this.
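
To illustrate the general idea behind such an optimized layout, here is a minimal sketch of prefix salting in Java. It is not the exact scheme the RFC specifies, and the class, constant, and path names are hypothetical: each file is hashed into one of N fixed buckets, and that bucket becomes the leading folder under the table's storage location, so the table's objects spread across N distinct prefixes. Per the AWS guidance linked in the RFC text (roughly 3,500 write and 5,500 read requests per second per prefix), once S3 has created internal partitions for those prefixes (the 30-60 minute process discussed above), the table's aggregate request limits can scale with N.

```java
import java.util.Locale;

// Minimal sketch of prefix salting for a cloud-optimized storage layout
// (hypothetical names; not the exact scheme proposed in RFC-48).
public class SaltedKeySketch {

  // Hypothetical fan-out: number of distinct leading prefixes to spread objects across.
  private static final int NUM_PREFIX_BUCKETS = 64;

  // Hive-style key: every object shares the single table base-path prefix.
  static String hiveStyleKey(String basePath, String partitionPath, String fileName) {
    return basePath + "/" + partitionPath + "/" + fileName;
  }

  // Salted key: a stable hash bucket becomes the leading folder, so objects spread
  // across NUM_PREFIX_BUCKETS prefixes, each eligible for its own S3 request limits.
  static String saltedKey(String storageLocation, String partitionPath, String fileName) {
    int bucket = Math.floorMod((partitionPath + "/" + fileName).hashCode(), NUM_PREFIX_BUCKETS);
    return String.format(Locale.ROOT, "%s/%04d/%s/%s", storageLocation, bucket, partitionPath, fileName);
  }

  public static void main(String[] args) {
    String basePath = "s3://my-bucket/warehouse/my_hudi_table";   // hypothetical
    String partition = "datestr=2022-03-24";
    String fileName = "data-file-2.parquet";                      // illustrative name
    System.out.println(hiveStyleKey(basePath, partition, fileName));
    System.out.println(saltedKey(basePath, partition, fileName));
  }
}
```

The trade-off with any layout of this kind is that the files for a logical partition no longer live under a single folder, so partition-level listing has to go through table metadata rather than a plain prefix listing.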
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org