Posted to commits@hudi.apache.org by "Vinoth Chandar (Jira)" <ji...@apache.org> on 2020/01/21 03:17:00 UTC
[jira] [Comment Edited] (HUDI-84) Benchmark write/read paths on Hudi vs non-Hudi datasets
[ https://issues.apache.org/jira/browse/HUDI-84?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17019830#comment-17019830 ]
Vinoth Chandar edited comment on HUDI-84 at 1/21/20 3:16 AM:
-------------------------------------------------------------
Simple command to reproduce the spark df/rdd conversion issue
cc [~uditme] [~nishith29] [~vbalaji]: if we can find a simple solution for this, that would be great. This affects only the datasource write path (the DeltaStreamer/RDD paths are fine). I have uploaded files showing the Spark stage UI; you can see the additional compute overhead there.
{code:java}
// Baseline: read the input parquet and write it straight back out
val df = spark.read.parquet("file:///tmp/hudi-benchmark/input/*/*.parquet") // some input data
df.write.format("parquet").mode("overwrite").save("file:///tmp/parquet-write")

// Round-trip through RDD[InternalRow] -> Row -> DataFrame, as the datasource
// write path does; this second write shows the extra conversion overhead
val schema = df.schema
val encoder = org.apache.spark.sql.catalyst.encoders.RowEncoder.apply(schema).resolveAndBind()
val df2 = spark.createDataFrame(df.queryExecution.toRdd.map(encoder.fromRow), schema)
df2.write.format("parquet").mode("overwrite").save("file:///tmp/parquet-write")
{code}
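To put a number on the overhead, the two writes above can be wrapped in a small wall-clock timer. The {{timed}} helper below is a hypothetical addition (not part of the original snippet); it simply measures elapsed time around each write so the baseline and the round-trip write can be compared on the same input.

```scala
// Hypothetical helper: wall-clock timing for a code block, used to
// compare the plain DataFrame write against the toRdd/fromRow write.
def timed[T](label: String)(block: => T): T = {
  val start = System.nanoTime()
  val result = block // run the block being measured
  val elapsedMs = (System.nanoTime() - start) / 1e6
  println(f"$label%s took $elapsedMs%.1f ms")
  result // pass the block's result through unchanged
}
```

Usage would be, e.g., {{timed("df write") { df.write.format("parquet").mode("overwrite").save("file:///tmp/parquet-write") } }} for each of the two writes; note that results are only comparable on warm executors, since the first job also pays JVM/classloading costs.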
> Benchmark write/read paths on Hudi vs non-Hudi datasets
> -------------------------------------------------------
>
> Key: HUDI-84
> URL: https://issues.apache.org/jira/browse/HUDI-84
> Project: Apache Hudi (incubating)
> Issue Type: Test
> Components: Performance
> Reporter: Vinoth Chandar
> Assignee: Vinoth Chandar
> Priority: Major
> Labels: realtime-data-lakes
> Attachments: df-toRdd-write.pdf, df-write-stage.pdf
>
>
> * Index performance
> * SparkSQL (https://github.com/apache/incubator-hudi/issues/588#issuecomment-468055059)
> * Query planning
> * Bulk_insert, log ingest
> * Upsert, database change log
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)