Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/07/06 13:50:20 UTC

[GitHub] [hudi] bhasudha commented on issue #1798: Question reading partition path with less level is more faster than what document mentioned

bhasudha commented on issue #1798:
URL: https://github.com/apache/hudi/issues/1798#issuecomment-654251026


   > Document
   > 
   > ```
   > val hudiIncQueryDF = spark
   >      .read
   >      .format("org.apache.hudi")
   >      .option(DataSourceReadOptions.QUERY_TYPE_OPT_KEY, DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
   >      .load(tablePath + "/*") // The number of wildcard asterisks here must be one greater than the number of partition levels
   > ```
   > 
   > We have a path layout like data/YYYY/MM/DD, and when we query it as the documentation describes:
   > 
   > ```
   > spark.read.format("org.apache.hudi").load("s3://test/data/*/*/*/*")
   > // lists 4000+ files, takes ~60s
   > scala> res8.count
   > res9: Long = 313589086
   > ```
   > 
   > but when we test with
   > 
   > ```
   > spark.read.format("org.apache.hudi").load("s3://test/data/*/*/*")
   > // lists 600+ files, takes ~10s
   > scala> res10.count
   > res11: Long = 313589086
   > ```
   > 
   > The result is the same, but with `s3://test/data/*/*/*` the query runs much faster.
   > Basically, the more files the path glob has to list, the bigger the difference in time cost becomes....
   > 
   > Is there any concern with using a glob path with fewer levels down to the parquet files?
   
   @zherenyu831 Thanks for reaching out. Do you mind sharing what your query on the table was?
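
   For reference, a minimal spark-shell sketch of how the two glob depths could be timed side by side on the same table (the s3://test/data path and the three-level YYYY/MM/DD layout are taken from the report above; adjust them to your own table):

   ```
   // Minimal spark-shell sketch, assuming the data/YYYY/MM/DD layout reported above.
   // Both reads should return the same row count; spark.time prints the wall-clock time of each action.

   // Glob down to the parquet files (partition depth + 1 asterisks, as the docs describe)
   val deepDF = spark.read.format("org.apache.hudi").load("s3://test/data/*/*/*/*")

   // Glob one level shallower (partition directories only)
   val shallowDF = spark.read.format("org.apache.hudi").load("s3://test/data/*/*/*")

   spark.time(deepDF.count())     // reported above: 4000+ files listed, ~60s
   spark.time(shallowDF.count())  // reported above: 600+ paths listed, ~10s
   ```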


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org