Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/06/13 12:01:20 UTC

[jira] [Resolved] (SPARK-15919) DStream "saveAsTextFile" doesn't update the prefix after each checkpoint

     [ https://issues.apache.org/jira/browse/SPARK-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-15919.
-------------------------------
    Resolution: Not A Problem

No, this is simple to accomplish in Spark already. The prefix argument to saveAsTextFiles is evaluated once, when the streaming graph is defined, not once per batch. Use foreachRDD to get each batch's RDD and timestamp, build the per-batch path there, and call saveAsTextFile on the RDD.
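
A minimal sketch of that approach (assuming "stream" is the reporter's JavaDStream<String> and getOutputPath() is their time-based helper; Spark 1.6's Java API provides a two-argument foreachRDD that supplies the batch Time):

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.streaming.Time;

    stream.repartition(1).foreachRDD((JavaRDD<String> rdd, Time time) -> {
        // getOutputPath() runs here, once per batch, so every batch gets a
        // fresh, time-based path; time.milliseconds() is the batch timestamp
        // and could be used to build the path instead.
        if (!rdd.isEmpty()) {
            rdd.saveAsTextFile(getOutputPath());
        }
    });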

> DStream "saveAsTextFile" doesn't update the prefix after each checkpoint
> ------------------------------------------------------------------------
>
>                 Key: SPARK-15919
>                 URL: https://issues.apache.org/jira/browse/SPARK-15919
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API
>    Affects Versions: 1.6.1
>         Environment: Amazon EMR
>            Reporter: Aamir Abbas
>
> I have a Spark streaming job that reads a data stream and saves it as a text file after a predefined time interval, using:
> stream.dstream().repartition(1).saveAsTextFiles(getOutputPath(), "");
> The function getOutputPath() generates a new path each time it is called, based on the current system time.
> However, the output path prefix remains the same for all batches, which effectively means the function is not called again for the next batch of the stream, although the files are being saved after each checkpoint interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org