Posted to user@spark.apache.org by Guillermo Ortiz <ko...@gmail.com> on 2014/09/12 16:45:29 UTC

Fwd: Define the name of the outputs with Java-Spark.

I would like to define the names of my output files in Spark. I have a process
which writes many files and I would like to name them; is that possible? I
guess it's not possible with the saveAsTextFile method.

It would be something similar to Hadoop's MultipleOutputs.

Re: Define the name of the outputs with Java-Spark.

Posted by Xiangrui Meng <me...@gmail.com>.
Spark doesn't support MultipleOutputs at this time. You can cache the
parent RDD, then create filtered RDDs from it and save them separately.
-Xiangrui
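
The suggestion above might look like the following sketch in Java. This is an
illustration, not a tested recipe: the input/output paths, the "category,payload"
line format, and the category names are all hypothetical, and it assumes a local
Spark master for demonstration.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;

public class NamedOutputs {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("named-outputs")
                .setMaster("local[*]"); // hypothetical; use your cluster master
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Parent RDD: lines of the form "<category>,<payload>" (hypothetical format).
        JavaRDD<String> parent = sc.textFile("hdfs:///input/data.txt");

        // Cache the parent so each filtered child RDD does not re-read the input.
        parent.cache();

        // One filtered RDD per desired output name, each saved to its own directory.
        for (String category : Arrays.asList("errors", "warnings", "info")) {
            JavaRDD<String> subset =
                    parent.filter(line -> line.startsWith(category + ","));
            subset.saveAsTextFile("hdfs:///output/" + category);
        }

        sc.stop();
    }
}
```

Note that each saveAsTextFile call produces a directory named after the category
(containing part-* files), so this controls the output path per group, not the
names of the individual part files.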

On Fri, Sep 12, 2014 at 7:45 AM, Guillermo Ortiz <ko...@gmail.com> wrote:
>
> I would like to define the names of my output files in Spark. I have a process
> which writes many files and I would like to name them; is that possible? I
> guess it's not possible with the saveAsTextFile method.
>
> It would be something similar to Hadoop's MultipleOutputs.
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org