Posted to user@flink.apache.org by Abrar Sheikh <ab...@gmail.com> on 2019/09/18 16:56:11 UTC

changing flink/kafka configs for stateful flink streaming applications

Hey all,

One of the known limitations of stateful Spark Streaming applications is
that we cannot alter Spark configurations or Kafka configurations after
the first run of the stateful streaming application. This has been
explained well in
https://www.linkedin.com/pulse/upgrading-running-spark-streaming-application-code-changes-prakash/

Do stateful Flink applications share this limitation with Spark?

Thanks,

-- 
Abrar Sheikh

Re: changing flink/kafka configs for stateful flink streaming applications

Posted by Abrar Sheikh <ab...@gmail.com>.
Thank you for the clarification.

On Fri, Sep 20, 2019 at 6:59 AM Fabian Hueske <fh...@gmail.com> wrote:

> Hi,
>
> It depends.
>
> There are many things that can be changed. A savepoint in Flink contains
> only the state of the application and not the configuration of the system.
> So an application can be migrated to another cluster that runs with a
> different configuration.
> There are some exceptions, such as the configuration of the default state
> backend (in case it is not configured in the application itself) and the
> checkpointing settings.
>
> If it is about the configuration of the application itself (and not the
> system), you can do a lot of things in Flink.
> You can even implement the application in such a way that it reconfigures
> itself while it is running.
>
> Since the latest release (Flink 1.9), Flink features the State Processor
> API, which allows you to create or modify savepoints with a batch program.
> This can be used to adjust or bootstrap savepoints.
>
> Best, Fabian
>
>
> On Wed, Sep 18, 2019 at 6:56 PM Abrar Sheikh <
> abrar2002as@gmail.com> wrote:
>
>> Hey all,
>>
>> One of the known limitations of stateful Spark Streaming applications is
>> that we cannot alter Spark configurations or Kafka configurations after
>> the first run of the stateful streaming application. This has been
>> explained well in
>> https://www.linkedin.com/pulse/upgrading-running-spark-streaming-application-code-changes-prakash/
>>
>> Do stateful Flink applications share this limitation with Spark?
>>
>> Thanks,
>>
>> --
>> Abrar Sheikh
>>
>

-- 
Abrar Sheikh

Re: changing flink/kafka configs for stateful flink streaming applications

Posted by Fabian Hueske <fh...@gmail.com>.
Hi,

It depends.

There are many things that can be changed. A savepoint in Flink contains
only the state of the application and not the configuration of the system.
So an application can be migrated to another cluster that runs with a
different configuration.
There are some exceptions, such as the configuration of the default state
backend (in case it is not configured in the application itself) and the
checkpointing settings.
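
For example, the properties of the Kafka connector are passed in when the
job is submitted, so a job restored from a savepoint can be resubmitted
with different connector settings. A rough, untested sketch (the broker
address, group id, and topic name are placeholders):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaConfigExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // These properties are read at submission time, not from the
        // savepoint, so they may differ between the first run and a restore.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder
        props.setProperty("group.id", "my-consumer-group");    // placeholder

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

        env.addSource(consumer).print();
        env.execute("kafka-config-example");
    }
}

Note that the consumer stores its Kafka offsets in Flink state, so after a
restore it continues from the offsets in the savepoint rather than from the
offsets committed to Kafka.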

If it is about the configuration of the application itself (and not the
system), you can do a lot of things in Flink.
You can even implement the application in such a way that it reconfigures
itself while it is running.
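
One common way to do this is the broadcast state pattern: stream
configuration updates into the job as a second input and connect it to the
main stream. A rough sketch (the socket sources and the "mode" key are just
placeholders):

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class ReconfigurableJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder sources: the main data and a stream of config updates.
        DataStream<String> events = env.socketTextStream("localhost", 9999);
        DataStream<String> configUpdates = env.socketTextStream("localhost", 9998);

        MapStateDescriptor<String, String> configDescriptor =
                new MapStateDescriptor<>("config",
                        BasicTypeInfo.STRING_TYPE_INFO,
                        BasicTypeInfo.STRING_TYPE_INFO);

        // Broadcast the config stream so every parallel task sees updates.
        BroadcastStream<String> configBroadcast =
                configUpdates.broadcast(configDescriptor);

        events.connect(configBroadcast)
                .process(new BroadcastProcessFunction<String, String, String>() {
                    @Override
                    public void processElement(String value, ReadOnlyContext ctx,
                            Collector<String> out) throws Exception {
                        // Use the most recently broadcast configuration.
                        String mode =
                                ctx.getBroadcastState(configDescriptor).get("mode");
                        out.collect(mode == null ? value : mode + ":" + value);
                    }

                    @Override
                    public void processBroadcastElement(String update, Context ctx,
                            Collector<String> out) throws Exception {
                        // A new configuration value arrived; store it.
                        ctx.getBroadcastState(configDescriptor).put("mode", update);
                    }
                })
                .print();

        env.execute("reconfigurable-job");
    }
}

The configuration values live in broadcast state, so they are checkpointed
and survive restarts from a savepoint as well.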

Since the latest release (Flink 1.9), Flink features the State Processor
API, which allows you to create or modify savepoints with a batch program.
This can be used to adjust or bootstrap savepoints.
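
Reading state out of an existing savepoint looks roughly like this
(untested sketch; the savepoint path, operator uid, and state name are
placeholders):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class ReadSavepointExample {
    public static void main(String[] args) throws Exception {
        // The State Processor API runs as a regular batch (DataSet) program.
        ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();

        // Load the savepoint; the path and state backend are placeholders.
        ExistingSavepoint savepoint = Savepoint.load(
                bEnv, "hdfs:///savepoints/savepoint-xyz", new MemoryStateBackend());

        // Read a list state registered under the given operator uid and name.
        DataSet<Long> counts =
                savepoint.readListState("my-operator-uid", "counter-state", Types.LONG);

        counts.print();
    }
}

Writing works the other way around: bootstrap transformations built from a
DataSet can be written out as a new savepoint via Savepoint.create(...).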

Best, Fabian


On Wed, Sep 18, 2019 at 6:56 PM Abrar Sheikh <
abrar2002as@gmail.com> wrote:

> Hey all,
>
> One of the known limitations of stateful Spark Streaming applications is
> that we cannot alter Spark configurations or Kafka configurations after
> the first run of the stateful streaming application. This has been
> explained well in
> https://www.linkedin.com/pulse/upgrading-running-spark-streaming-application-code-changes-prakash/
>
> Do stateful Flink applications share this limitation with Spark?
>
> Thanks,
>
> --
> Abrar Sheikh
>