Posted to issues@spark.apache.org by "Manish Kumar (JIRA)" <ji...@apache.org> on 2016/06/23 10:58:16 UTC
[jira] [Comment Edited] (SPARK-16169) Saving intermediate dataframe increases processing time up to 5 times.
[ https://issues.apache.org/jira/browse/SPARK-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15346267#comment-15346267 ]
Manish Kumar edited comment on SPARK-16169 at 6/23/16 10:58 AM:
----------------------------------------------------------------
Hi [~srowen]
I am not saying it is taking 5 minutes longer; I am saying it is taking 5 times as long.
All the jobs shown in the attached Spark UI are completed within 10 minutes, but the Spark application remains in running status for almost 50 minutes.
I am saving a dataframe that is used again in further processing.
I hope that is clear now.
Regards
Manish Kumar
> Saving intermediate dataframe increases processing time up to 5 times.
> ----------------------------------------------------------------------
>
> Key: SPARK-16169
> URL: https://issues.apache.org/jira/browse/SPARK-16169
> Project: Spark
> Issue Type: Question
> Components: Spark Submit, Web UI
> Affects Versions: 1.6.1
> Environment: Amazon EMR
> Reporter: Manish Kumar
> Labels: performance
> Attachments: Spark-UI.png
>
>
> When a Spark application (written in Scala) saves an intermediate dataframe, processing takes almost 5 times longer.
> Although the Spark UI clearly shows that all jobs are completed, the Spark application remains in running status.
> Below is the command for saving the intermediate output and then using the dataframe.
> {noformat}
> saveDataFrame(flushPath, flushFormat, isCoalesce, flushMode, previousDataFrame, sqlContext)
> previousDataFrame.count
> {noformat}
> Here, previousDataFrame is the result of the last step, and saveDataFrame simply saves the DataFrame at the given location; previousDataFrame is then used by the next steps/transformations.
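> One possible explanation worth checking (a sketch, not a confirmed fix): Spark DataFrames are lazily evaluated, so an action that follows the save can recompute the entire lineage unless the intermediate result is persisted first. The snippet below reuses the names from the report (saveDataFrame, previousDataFrame, flushPath, etc. are the reporter's own); the persist call is the standard Spark 1.6 API.
> {noformat}
> import org.apache.spark.storage.StorageLevel
>
> // Persist before the save so that both the save and the later count
> // reuse the materialized data instead of recomputing the lineage.
> previousDataFrame.persist(StorageLevel.MEMORY_AND_DISK)
> saveDataFrame(flushPath, flushFormat, isCoalesce, flushMode, previousDataFrame, sqlContext)
> previousDataFrame.count
> previousDataFrame.unpersist()
> {noformat}
> If the extra 40 minutes disappear with persist() in place, the slowdown was recomputation rather than the save itself.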
> Below is the Spark UI screenshot, which shows the jobs as completed even though some tasks inside them are neither completed nor skipped.
> !Spark-UI.png!
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org