Posted to user@spark.apache.org by Anand Chandrashekar <an...@gmail.com> on 2017/10/13 15:41:20 UTC

Kafka 010 Spark 2.2.0 Streaming / Custom checkpoint strategy

Greetings!

I would like to implement a custom Kafka checkpoint strategy: instead of HDFS, I would
like to use Redis. Is there a way to change this behavior? Any advice would help. Thanks!
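
To make the question concrete, here is roughly what I am hoping to end up with. This is a minimal, untested sketch based on the spark-streaming-kafka-0-10 direct stream and the Jedis client; the topic name, Redis host, partition key layout, and batch interval are just placeholders:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import redis.clients.jedis.Jedis

object RedisOffsetsSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("kafka-redis-offsets"), Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "redis-offsets-sketch",
      // auto-commit disabled so that offsets are tracked only in Redis
      "enable.auto.commit" -> (false: java.lang.Boolean))

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

    stream.foreachRDD { rdd =>
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process rdd here ...
      // After processing, persist each partition's ending offset to Redis
      // instead of relying on an HDFS checkpoint directory.
      val jedis = new Jedis("localhost")
      ranges.foreach(r =>
        jedis.set(s"offsets:${r.topic}:${r.partition}", r.untilOffset.toString))
      jedis.close()
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

On restart I would read those keys back and pass them as starting offsets via the Subscribe overload that accepts a Map[TopicPartition, Long].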

Regards,
Anand.

Re: Kafka 010 Spark 2.2.0 Streaming / Custom checkpoint strategy

Posted by Jörn Franke <jo...@gmail.com>.
HDFS can be replaced by other file system plugins (e.g. Ignite's IGFS, S3, etc.), so the easiest approach is to write a file system plugin. This is not a plugin for Spark itself but part of the Hadoop functionality that Spark uses.
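
As a rough sketch of the wiring: RedisFileSystem below is a hypothetical class you would have to write yourself by extending org.apache.hadoop.fs.FileSystem (implementing open, create, rename, delete, listStatus, mkdirs, getFileStatus, etc.); it is not an existing plugin. Only the configuration mechanism shown here is standard:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("redis-checkpoint-sketch")
  // spark.hadoop.* keys are copied into the Hadoop Configuration, so this registers
  // the custom FileSystem implementation for the "redis" URI scheme.
  .set("spark.hadoop.fs.redis.impl", "com.example.fs.RedisFileSystem")

val ssc = new StreamingContext(conf, Seconds(10))

// Checkpointing goes through the Hadoop FileSystem API; with the scheme registered,
// a redis:// checkpoint directory resolves to the custom implementation instead of HDFS.
ssc.checkpoint("redis://redis-host:6379/spark/checkpoints")

The implementation class would of course need to be on both the driver and executor classpath.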

> On 13. Oct 2017, at 17:41, Anand Chandrashekar <an...@gmail.com> wrote:
> 
> Greetings!
> 
> I would like to implement a custom Kafka checkpoint strategy: instead of HDFS, I would like to use Redis. Is there a way to change this behavior? Any advice would help. Thanks!
> 
> Regards,
> Anand.

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org