Posted to commits@hudi.apache.org by "xushiyan (via GitHub)" <gi...@apache.org> on 2023/03/26 00:30:53 UTC

[GitHub] [hudi] xushiyan opened a new pull request, #8290: [HUDI-5983] Improve loading data via cloud store incr source

xushiyan opened a new pull request, #8290:
URL: https://github.com/apache/hudi/pull/8290

   ### Change Logs
   
Create properly sized Spark partitions for the cloud store incremental sources to load data.
   
   ### Impact
   
DeltaStreamer pipelines using S3EventsHoodieIncrSource and GcsEventsHoodieIncrSource.
   
   ### Risk level
   
   Low.
   
   ### Documentation Update
   
N/A
   
   ### Contributor's checklist
   
   - [ ] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [hudi] nsivabalan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "nsivabalan (via GitHub)" <gi...@apache.org>.
nsivabalan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1155545875


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/gcs/GcsObjectsFetcher.java:
##########
@@ -21,29 +21,33 @@
 import org.apache.hudi.common.config.SerializableConfiguration;
 import org.apache.hudi.common.config.TypedProperties;
 import org.apache.hudi.common.util.Option;
-import org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon;
+import org.apache.hudi.utilities.sources.helpers.CloudObject;
+
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
 import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Encoders;
 import org.apache.spark.sql.Row;
 
 import java.io.Serializable;
 import java.util.List;
+
 import static org.apache.hudi.common.util.StringUtils.isNullOrEmpty;
+import static org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon.getCloudObjectsPerPartition;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.CLOUD_DATAFILE_EXTENSION;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_PREFIX;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_SUBSTR;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.SELECT_RELATIVE_PATH_PREFIX;
 
 /**
- * Extracts a list of fully qualified GCS filepaths from a given Spark Dataset as input.
+ * Extracts a list of GCS {@link CloudObject} containing filepaths from a given Spark Dataset as input.
  * Optionally:
  * i) Match the filename and path against provided input filter strings
  * ii) Check if each file exists on GCS, in which case it assumes SparkContext is already
  * configured with GCS options through GcsEventsHoodieIncrSource.addGcsAccessConfs().
  */
-public class FilePathsFetcher implements Serializable {
+public class GcsObjectsFetcher implements Serializable {

Review Comment:
   Sounds good.





[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1485655349

   ## CI report:
   
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15944) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1483983159

   ## CI report:
   
   * 30d38f447c7c004e33174c09337fa2de8bfce8f9 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15920) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493699313

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 9908244895ed25da794a6ccfa8e31aa6b438bf1c Azure: [CANCELED](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16078) 
   * a75b548f5f37c8a8f40ed70dfecb9eab89088ee1 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16081) 
   * 5424663e26cbc13c38807304a82ccf0618ae1403 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] codope commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "codope (via GitHub)" <gi...@apache.org>.
codope commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1148822988


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;

Review Comment:
   Should we be more conservative? Is 10% enough?





[GitHub] [hudi] xushiyan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1148936079


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;

Review Comment:
   It's only an estimate: there can be many records with small payloads, or far fewer records with large payloads, or the source data may itself be Hudi records. So it's hard to pick one number that works for all cases.
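
The sizing heuristic discussed above (sum object sizes, inflate by 10% for hoodie meta fields, then derive a partition count) can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code; `estimatePartitions` and `bytesPerPartition` are hypothetical names.

```java
import java.util.Arrays;
import java.util.List;

public class PartitionEstimate {

  // Estimate the number of Spark partitions from the total size of the
  // cloud objects, inflating by 10% for potential hoodie meta fields.
  static int estimatePartitions(List<Long> objectSizes, long bytesPerPartition) {
    long totalSize = objectSizes.stream().mapToLong(Long::longValue).sum();
    totalSize = (long) (totalSize * 1.1); // inflate 10% for potential hoodie meta fields
    // ceiling division, with at least one partition
    return (int) Math.max(1, (totalSize + bytesPerPartition - 1) / bytesPerPartition);
  }

  public static void main(String[] args) {
    // e.g. three objects of ~128 MB each, targeting 128 MB per partition
    List<Long> sizes = Arrays.asList(128_000_000L, 128_000_000L, 128_000_000L);
    System.out.println(estimatePartitions(sizes, 128_000_000L)); // 4, due to the 10% inflation
  }
}
```

As the comment notes, any fixed inflation factor is a compromise; the 10% figure is a rough allowance, not a guarantee.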





[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1485444000

   ## CI report:
   
   * 30d38f447c7c004e33174c09337fa2de8bfce8f9 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15920) 
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493695164

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 9908244895ed25da794a6ccfa8e31aa6b438bf1c Azure: [CANCELED](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16078) 
   * a75b548f5f37c8a8f40ed70dfecb9eab89088ee1 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16081) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493653398

   ## CI report:
   
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15944) 
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] xushiyan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1155478537


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +139,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()

Review Comment:
   Yup, this was copied from the existing one. It was likely triggering the DAG eagerly; added the check.
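
The concern above is that building the debug message (sampling the first ten paths) does work even when debug logging is off. A minimal sketch of the kind of guard added; the names (`debugEnabled`, `sampleMessage`) are illustrative, not the PR's exact code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DebugLogGuard {

  // Build the sample message; streaming over the paths has a cost, so
  // callers should only invoke this when debug logging is enabled.
  static String sampleMessage(List<String> paths) {
    return "Extracted distinct files " + paths.size()
        + " and some samples " + paths.stream().limit(10).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    boolean debugEnabled = false; // stands in for LOG.isDebugEnabled()
    List<String> paths = Arrays.asList("s3://bucket/a.parquet", "s3://bucket/b.parquet");
    // Guarding avoids evaluating the message (and its stream) when debug is off.
    if (debugEnabled) {
      System.out.println(sampleMessage(paths)); // stands in for LOG.debug(...)
    }
  }
}
```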





[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1494676688

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 5424663e26cbc13c38807304a82ccf0618ae1403 Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16084) 
   * 31333ed0759fd27e5f2ebf88b5bd79573d69c47e Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16098) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1483965658

   ## CI report:
   
   * 30d38f447c7c004e33174c09337fa2de8bfce8f9 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15920) 
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1483964612

   ## CI report:
   
   * 30d38f447c7c004e33174c09337fa2de8bfce8f9 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493661754

   ## CI report:
   
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15944) 
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 9908244895ed25da794a6ccfa8e31aa6b438bf1c Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16078) 
   * a75b548f5f37c8a8f40ed70dfecb9eab89088ee1 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     @hudi-bot supports the following commands:
   
    - `@hudi-bot run azure` re-run the last Azure build
   </details>




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1494260614

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 5424663e26cbc13c38807304a82ccf0618ae1403 Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16084) 
   




[GitHub] [hudi] xushiyan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1155478862


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/gcs/GcsObjectsFetcher.java:
##########
@@ -21,29 +21,33 @@
 import org.apache.hudi.common.config.SerializableConfiguration;
 import org.apache.hudi.common.config.TypedProperties;
 import org.apache.hudi.common.util.Option;
-import org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon;
+import org.apache.hudi.utilities.sources.helpers.CloudObject;
+
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
 import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Encoders;
 import org.apache.spark.sql.Row;
 
 import java.io.Serializable;
 import java.util.List;
+
 import static org.apache.hudi.common.util.StringUtils.isNullOrEmpty;
+import static org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon.getCloudObjectsPerPartition;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.CLOUD_DATAFILE_EXTENSION;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_PREFIX;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_SUBSTR;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.SELECT_RELATIVE_PATH_PREFIX;
 
 /**
- * Extracts a list of fully qualified GCS filepaths from a given Spark Dataset as input.
+ * Extracts a list of GCS {@link CloudObject} containing filepaths from a given Spark Dataset as input.
  * Optionally:
  * i) Match the filename and path against provided input filter strings
  * ii) Check if each file exists on GCS, in which case it assumes SparkContext is already
  * configured with GCS options through GcsEventsHoodieIncrSource.addGcsAccessConfs().
  */
-public class FilePathsFetcher implements Serializable {
+public class GcsObjectsFetcher implements Serializable {

Review Comment:
   Renamed CloudObject to CloudObjectMetadata; hence we now have `GcsObjectMetadataFetcher` and `GcsObjectDataFetcher`.





[GitHub] [hudi] nsivabalan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "nsivabalan (via GitHub)" <gi...@apache.org>.
nsivabalan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1152618564


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/gcs/GcsObjectsFetcher.java:
##########
@@ -21,29 +21,33 @@
 import org.apache.hudi.common.config.SerializableConfiguration;
 import org.apache.hudi.common.config.TypedProperties;
 import org.apache.hudi.common.util.Option;
-import org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon;
+import org.apache.hudi.utilities.sources.helpers.CloudObject;
+
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
 import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Encoders;
 import org.apache.spark.sql.Row;
 
 import java.io.Serializable;
 import java.util.List;
+
 import static org.apache.hudi.common.util.StringUtils.isNullOrEmpty;
+import static org.apache.hudi.utilities.sources.helpers.CloudObjectsSelectorCommon.getCloudObjectsPerPartition;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.CLOUD_DATAFILE_EXTENSION;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_PREFIX;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.IGNORE_RELATIVE_PATH_SUBSTR;
 import static org.apache.hudi.utilities.sources.helpers.CloudStoreIngestionConfig.SELECT_RELATIVE_PATH_PREFIX;
 
 /**
- * Extracts a list of fully qualified GCS filepaths from a given Spark Dataset as input.
+ * Extracts a list of GCS {@link CloudObject} containing filepaths from a given Spark Dataset as input.
  * Optionally:
  * i) Match the filename and path against provided input filter strings
  * ii) Check if each file exists on GCS, in which case it assumes SparkContext is already
  * configured with GCS options through GcsEventsHoodieIncrSource.addGcsAccessConfs().
  */
-public class FilePathsFetcher implements Serializable {
+public class GcsObjectsFetcher implements Serializable {

Review Comment:
   GcsObjectsFetcher and GcsObjectsDataFetcher are still a bit confusing.
   Can we name these as
   GcsObjectsPathFetcher
   GcsObjectsDataFetcher



##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +139,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;
+    long parquetMaxFileSize = props.getLong(PARQUET_MAX_FILE_SIZE.key(), Long.parseLong(PARQUET_MAX_FILE_SIZE.defaultValue()));
+    int numPartitions = (int) Math.max(totalSize / parquetMaxFileSize, 1);
+    return Option.of(reader.load(paths.toArray(new String[cloudObjects.size()])).coalesce(numPartitions));

Review Comment:
   don't we need to do repartition here?
   coalesce [may not increase](https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.DataFrame.coalesce.html) the number of partitions.
   For example, if we have one 1 GB file from S3, we want to repartition so that we get ~10 Spark partitions.
   
   Maybe we can optimize based on the total size and the max parquet file size: if the total is smaller, we can coalesce; if it's larger, we can repartition.
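   The sizing rule under discussion can be sketched outside Spark. This is a minimal sketch, assuming a target of roughly one max-size parquet file per partition; the class and method names (`PartitionSizingSketch`, `computeTargetPartitions`, `choosePartitioning`) are illustrative, not Hudi APIs.
   
   ```java
   // Illustrative sketch of the sizing rule discussed above; helper names
   // are hypothetical, not Hudi APIs.
   public class PartitionSizingSketch {
   
     // Aim for roughly one output parquet file of at most maxFileSizeBytes
     // per Spark partition.
     static int computeTargetPartitions(long totalSizeBytes, long maxFileSizeBytes) {
       return (int) Math.max(totalSizeBytes / maxFileSizeBytes, 1);
     }
   
     // coalesce can only shrink the partition count without a shuffle;
     // growing it requires a full-shuffle repartition.
     static String choosePartitioning(int currentPartitions, int targetPartitions) {
       return targetPartitions < currentPartitions ? "coalesce" : "repartition";
     }
   
     public static void main(String[] args) {
       long maxFileSize = 120L * 1024 * 1024; // 120 MB max parquet file size
       // One 1 GB S3 file read as a single partition: target is 8, so repartition.
       int target = computeTargetPartitions(1024L * 1024 * 1024, maxFileSize);
       System.out.println(target + " -> " + choosePartitioning(1, target));
       // Many small files read as 200 partitions but only 10 MB total: coalesce to 1.
       int small = computeTargetPartitions(10L * 1024 * 1024, maxFileSize);
       System.out.println(small + " -> " + choosePartitioning(200, small));
     }
   }
   ```
   
   The split mirrors the trade-off raised here: coalesce avoids a shuffle but can only reduce, so growing past the current partition count would need repartition.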
   
   



##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +139,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()

Review Comment:
   do you think we should guard this with Logger.isDebugEnabled()?
   Since building this message might trigger the DAG, I just want to be cautious.
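   The concern can be illustrated with a plain `java.util.logging` sketch (Hudi itself uses log4j here, whose `isDebugEnabled()` plays the same role): guard the expensive message construction so it never runs when the level is disabled. The class and helper names are illustrative only.
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.logging.Level;
   import java.util.logging.Logger;
   import java.util.stream.Collectors;
   
   public class DebugGuardSketch {
     private static final Logger LOG = Logger.getLogger(DebugGuardSketch.class.getName());
     static int expensiveCalls = 0; // counts how often the costly message was built
   
     // Stand-in for an expensive computation, e.g. sampling paths from a Dataset,
     // which could otherwise trigger real work just to build a log line.
     static String sample(List<String> paths) {
       expensiveCalls++;
       return paths.stream().limit(10).collect(Collectors.toList()).toString();
     }
   
     public static void main(String[] args) {
       List<String> paths = Arrays.asList("s3://bucket/a.parquet", "s3://bucket/b.parquet");
       LOG.setLevel(Level.INFO); // debug-level (FINE) logging is disabled
       if (LOG.isLoggable(Level.FINE)) { // guard: skipped entirely when disabled
         LOG.fine("sample paths: " + sample(paths));
       }
       System.out.println("expensive calls: " + expensiveCalls);
     }
   }
   ```
   
   With the guard in place, `sample` is never invoked when debug logging is off, so no Spark action would be triggered just for logging.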



##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;

Review Comment:
   I feel we can't just do a flat 10%: our meta fields overhead is not proportional to payload size.



##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;

Review Comment:
   We can get the avg record size from the latest commit metadata, right? Is that doable?





[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1485455503

   ## CI report:
   
   * 30d38f447c7c004e33174c09337fa2de8bfce8f9 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15920) 
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15944) 
   




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1494664698

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 5424663e26cbc13c38807304a82ccf0618ae1403 Azure: [FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16084) 
   * 31333ed0759fd27e5f2ebf88b5bd79573d69c47e UNKNOWN
   




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493657154

   ## CI report:
   
   * b46d29c20b41e99cf8ff217f8f48ae4155574067 Azure: [SUCCESS](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=15944) 
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * 9908244895ed25da794a6ccfa8e31aa6b438bf1c UNKNOWN
   




[GitHub] [hudi] xushiyan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1154118452


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +139,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;
+    long parquetMaxFileSize = props.getLong(PARQUET_MAX_FILE_SIZE.key(), Long.parseLong(PARQUET_MAX_FILE_SIZE.defaultValue()));
+    int numPartitions = (int) Math.max(totalSize / parquetMaxFileSize, 1);
+    return Option.of(reader.load(paths.toArray(new String[cloudObjects.size()])).coalesce(numPartitions));

Review Comment:
   as discussed, coalesce won't make the partition number go up. The main purpose here is to avoid the small-files / too-many-partitions case, which is exactly what coalesce is meant for. Besides, it avoids the full shuffle that repartition does, so it's preferred.





[GitHub] [hudi] xushiyan commented on a diff in pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on code in PR #8290:
URL: https://github.com/apache/hudi/pull/8290#discussion_r1154116735


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/CloudObjectsSelectorCommon.java:
##########
@@ -115,4 +129,41 @@ private static boolean checkIfFileExists(String storageUrlSchemePrefix, String b
       throw new HoodieIOException(errMsg, ioe);
     }
   }
+
+  public static Option<Dataset<Row>> loadAsDataset(SparkSession spark, List<CloudObject> cloudObjects, TypedProperties props, String fileFormat) {
+    LOG.debug("Extracted distinct files " + cloudObjects.size()
+        + " and some samples " + cloudObjects.stream().map(CloudObject::getPath).limit(10).collect(Collectors.toList()));
+
+    if (isNullOrEmpty(cloudObjects)) {
+      return Option.empty();
+    }
+    DataFrameReader reader = spark.read().format(fileFormat);
+    String datasourceOpts = props.getString(SPARK_DATASOURCE_OPTIONS, null);
+    if (StringUtils.isNullOrEmpty(datasourceOpts)) {
+      // fall back to legacy config for BWC. TODO consolidate in HUDI-5780
+      datasourceOpts = props.getString(S3EventsHoodieIncrSource.Config.SPARK_DATASOURCE_OPTIONS, null);
+    }
+    if (StringUtils.nonEmpty(datasourceOpts)) {
+      final ObjectMapper mapper = new ObjectMapper();
+      Map<String, String> sparkOptionsMap = null;
+      try {
+        sparkOptionsMap = mapper.readValue(datasourceOpts, Map.class);
+      } catch (IOException e) {
+        throw new HoodieException(String.format("Failed to parse sparkOptions: %s", datasourceOpts), e);
+      }
+      LOG.info(String.format("sparkOptions loaded: %s", sparkOptionsMap));
+      reader = reader.options(sparkOptionsMap);
+    }
+    List<String> paths = new ArrayList<>();
+    long totalSize = 0;
+    for (CloudObject o: cloudObjects) {
+      paths.add(o.getPath());
+      totalSize += o.getSize();
+    }
+    // inflate 10% for potential hoodie meta fields
+    totalSize *= 1.1;

Review Comment:
   as discussed, this is just an estimation.
   - input data files are usually vanilla parquet or other formats without hudi meta fields; in that case, 10% is a rough estimate for large records. For small records, where meta fields could take up 80%, this 10% buffer won't make things worse.
   - in the rare case where the input data files are hudi parquet, this 10% buffer won't be much worse than the accurate estimate (0%).
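   For intuition, here is a small sketch of the arithmetic behind the buffer (the 1.1 factor and the max/divide formula mirror the diff quoted earlier; the class and method names are illustrative, not Hudi APIs): the 10% inflation only nudges the estimated partition count.
   
   ```java
   // Illustrative arithmetic for the 10% buffer: it slightly raises the
   // estimated partition count without changing its order of magnitude.
   public class InflationEstimateSketch {
   
     static int numPartitions(long totalSizeBytes, double inflation, long maxFileSizeBytes) {
       long inflated = (long) (totalSizeBytes * inflation);
       return (int) Math.max(inflated / maxFileSizeBytes, 1);
     }
   
     public static void main(String[] args) {
       long maxFileSize = 120L * 1024 * 1024; // 120 MB max parquet file size
       long oneGb = 1024L * 1024 * 1024;
       System.out.println(numPartitions(oneGb, 1.0, maxFileSize)); // without buffer: 8
       System.out.println(numPartitions(oneGb, 1.1, maxFileSize)); // with 10% buffer: 9
     }
   }
   ```
   
   So whether or not the 10% reflects the true meta-field overhead, it shifts the estimate by at most one partition at this scale, which is why a rough number is acceptable here.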





[GitHub] [hudi] xushiyan commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1495220009

   ![Screenshot 2023-04-03 at 8 41 39 PM](https://user-images.githubusercontent.com/2701446/229664578-243eaafc-a52f-4e05-b0b1-1f2f4af07e08.png)
   
   CI passed




[GitHub] [hudi] hudi-bot commented on pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "hudi-bot (via GitHub)" <gi...@apache.org>.
hudi-bot commented on PR #8290:
URL: https://github.com/apache/hudi/pull/8290#issuecomment-1493704527

   ## CI report:
   
   * aca248bbeccad4ef90d4c2566da9171e042877bc UNKNOWN
   * a75b548f5f37c8a8f40ed70dfecb9eab89088ee1 Azure: [CANCELED](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16081) 
   * 5424663e26cbc13c38807304a82ccf0618ae1403 Azure: [PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=16084) 
   




[GitHub] [hudi] xushiyan merged pull request #8290: [HUDI-5983] Improve loading data via cloud store incr source

Posted by "xushiyan (via GitHub)" <gi...@apache.org>.
xushiyan merged PR #8290:
URL: https://github.com/apache/hudi/pull/8290

