Posted to user@flink.apache.org by Philip Doctor <ph...@physiq.com> on 2017/08/31 14:06:55 UTC

Sink -> Source

I have a few Flink jobs.  Several of them share the same code.  I was wondering if I could make those shared steps their own job and then specify that the sink for one process was the source for another process, stitching my jobs together.  Is this possible?  I didn’t see it in the docs.  It feels like I could hack something together with writeToSocket() on my data stream and then create a source that reads from a socket, but I was hoping there was a more fully baked solution.
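
For illustration, a minimal sketch of what that socket hand-off might look like, assuming String records and a hypothetical port 9999. One caveat: both writeToSocket() and socketTextStream() connect as clients, so a separate process would have to listen on the port and relay records between the two jobs, which is part of why this stays a hack:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class SocketHandOff {

    // Job 1: run the shared steps, then push the results out over a socket.
    static void upstreamJob() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> shared = env.fromElements("a", "b", "c") // stand-in input
                .map(String::toUpperCase);                          // stand-in for the shared steps
        // Connects as a client to whatever is listening on port 9999.
        shared.writeToSocket("localhost", 9999, new SimpleStringSchema());
        env.execute("upstream-job");
    }

    // Job 2: consume the upstream job's output as its own source.
    static void downstreamJob() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Also connects as a client; the socket source gives no delivery guarantees on failure.
        DataStream<String> in = env.socketTextStream("localhost", 9999);
        in.print();
        env.execute("downstream-job");
    }
}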

Thanks for your time.

Re: Sink -> Source

Posted by Nico Kruber <ni...@data-artisans.com>.
Hi Philip,
afaik, Flink doesn't offer this out-of-the-box. You could either hack 
something as suggested or use Kafka to glue different jobs together.

Both may affect exactly-once/at-least-once guarantees, however. Also refer to
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/guarantees.html


Nico

On Thursday, 31 August 2017 16:06:55 CEST Philip Doctor wrote:
> I have a few Flink jobs.  Several of them share the same code.  I was
> wondering if I could make those shared steps their own job and then specify
> that the sink for one process was the source for another process, stitching
> my jobs together.  Is this possible?  I didn’t see it in the docs.  It feels
> like I could hack something together with writeToSocket() on my data stream
> and then create a source that reads from a socket, but I was hoping there
> was a more fully baked solution.
>
> Thanks for your time.