Posted to user@flink.apache.org by Deniz Koçak <le...@gmail.com> on 2021/12/02 14:28:33 UTC

Stateful functions module configurations (module.yaml) per deployment environment

Hi,

We have a simple Stateful Functions job, consuming from Kafka, calling
an HTTP endpoint (on AWS, via an Elastic Load Balancer) and publishing
the result back to Kafka again.

* We created a jar file to be deployed on a standalone cluster (it is
not a Docker image), so we add `statefun-flink-distribution`
version 3.0.0 as a dependency of that jar.
* The entry class in our job configuration is
`org.apache.flink.statefun.flink.core.StatefulFunctionsJob`, and we
simply keep a single module.yaml file in the resources folder for the
module configuration (a sketch of such a file follows below).
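
For context, a minimal sketch of what that module.yaml looks like in
the StateFun 3.0 format (the ids, topics, and endpoint URL below are
illustrative placeholders, not our actual configuration):

version: "3.0"
module:
  meta:
    type: remote
  spec:
    endpoints:
      - endpoint:
          meta:
            kind: http
          spec:
            functions: example/*
            urlPathTemplate: https://my-load-balancer.example.com/statefun
    ingresses:
      - ingress:
          meta:
            type: io.statefun.kafka/ingress
            id: example/input
          spec:
            address: kafka-broker:9092
            consumerGroupId: my-consumer-group
            topics:
              - topic: requests
                valueType: example/Request
                targets:
                  - example/my-function
    egresses:
      - egress:
          meta:
            type: io.statefun.kafka/egress
            id: example/output
          spec:
            address: kafka-broker:9092
            deliverySemantic:
              type: at-least-once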

My question: we would like to deploy that jar to different
environments (dev and prod), and we are not sure how to pass a
different module configuration (module.yaml, or
module_nxt.yaml/module_prd.yaml) to the job during startup without
creating separate jar files for each environment.

Thanks,
Deniz

Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Igal Shilman <ig...@apache.org>.
Hello Deniz,
Glad to hear that it worked for you! As this is a feature that might
benefit others in the community, I've just merged it to our main
branch, which means that upcoming feature releases will include it :-)
Currently there are no plans to backport this to 3.1, however.
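
For reference, the merged change lets you point StateFun at a custom
module file through flink-conf.yaml (the path below is just an
example; the full steps are further down this thread):

# flink-conf.yaml
statefun.remote.module-name: /flink/usrlib/prod.yaml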

Cheers,
Igal.



Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Deniz Koçak <le...@gmail.com>.
Hello Igal,

First of all, thanks for your effort and feedback on this issue.
We followed the steps you specified and it seems to be working. To
briefly summarize what we have done (nothing different from what you
specified in your e-mail):

1) Compiled the `statefun-flink-distribution` artifact from your
branch and added that custom build as a dependency to our project.
2) Used the module definition format described here (a sketch of that
format follows below):
https://nightlies.apache.org/flink/flink-statefun-docs-master/docs/modules/overview/
3) Put our environment-specific module definition in S3 and simply
added it under `additionalDependencies` in the Ververica UI, as below:

spec:
  artifact:
    additionalDependencies:
      - 's3://rttk8s-nxt-v2/vvp/artifacts/namespaces/default/module-drk.yaml'
    entryClass: org.apache.flink.statefun.flink.core.StatefulFunctionsJob
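
For reference, a module definition in that format looks roughly like
this (the kinds are from the master docs; the ids, topics, and URL
below are illustrative placeholders, not our actual configuration):

kind: io.statefun.endpoints.v2/http
spec:
  functions: example/*
  urlPathTemplate: https://my-endpoint.example.com/statefun
---
kind: io.statefun.kafka.v1/ingress
spec:
  id: example/input
  address: kafka-broker:9092
  consumerGroupId: my-consumer-group
  topics:
    - topic: requests
      valueType: example/Request
      targets:
        - example/my-function
---
kind: io.statefun.kafka.v1/egress
spec:
  id: example/output
  address: kafka-broker:9092
  deliverySemantic:
    type: at-least-once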

We are also interested in whether that feature will be part of the
next release, and whether there is any plan/possibility to backport it
to a previous version like 3.1. At the moment we are planning to keep
this custom build in our own repo, but it would be very handy to see
the change in an official release.

Thanks,
Deniz


Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Igal Shilman <ig...@apache.org>.
Hello Deniz,

Looking at /flink/usrlib and the way it is expected to be used, Flink
will only pick up .jar files and include them in the classpath, so
unfortunately your module.yaml is being excluded.
If you just want to make it work and get on with your day, you can
simply place the module.yaml in a separate JAR; otherwise, keep reading :-)

I've created a branch [1] that supports providing a custom name for
the module.yaml. If you are comfortable building this branch and
trying it out, I can go forward with adding this to StateFun, as I
believe others might need similar functionality.

To make it work you need to:

1. Follow @Fabian Paul's advice and upload your custom module.yaml as
you did before; you can also rename it now to whatever name you want,
for example prod.yaml.
2. This file will appear at /flink/usrlib/prod.yaml.
3. You also need to point to this file by adding the following to your
flink-conf.yaml:

statefun.remote.module-name: /flink/usrlib/prod.yaml

4. At the bottom of this page [2] you can see a full example, and how
to add additional Flink configuration via the (flinkConfiguration)
property; a combined sketch follows below.
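
Putting steps 1-4 together, an abbreviated deployment spec would look
roughly like this (the bucket path is a placeholder, and the exact
nesting of flinkConfiguration in the full spec is described in [2]):

spec:
  artifact:
    kind: JAR
    additionalDependencies:
      - 's3://my-bucket/artifacts/prod.yaml'
    entryClass: org.apache.flink.statefun.flink.core.StatefulFunctionsJob
  flinkConfiguration:
    statefun.remote.module-name: /flink/usrlib/prod.yaml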

I hope this helps,
Igal.

[1] https://github.com/igalshilman/flink-statefun/tree/custom_module
[2]
https://docs.ververica.com/user_guide/application_operations/deployments/index.html#deployment-defaults



Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Deniz Koçak <le...@gmail.com>.
Hi Fabian,

Thanks for that solution. I removed the module.yaml file from the jar
file, assuming it would be fetched from S3 and used by the job. I
tried this on our job, but it still seems to be failing.

From the logs, the module.yaml file seems to be fetched from the S3 bucket:
----
com.ververica.platform.artifactfetcher.ArtifactFetcherEntryPoint -
Finished fetching
s3://rttk8s-nxt-v2/vvp/artifacts/namespaces/default/module.yaml into
/flink/usrlib/module.yaml
----

However, we got the exception below:

----
Caused by: java.lang.IllegalStateException: There are no ingress
defined. at org.apache.flink.statefun.flink.core.StatefulFunctionsUniverseValidator.validate(StatefulFunctionsUniverseValidator.java:25)
~[?:?]
----

Please let me know if you need further information. Thanks again for your help.

Deniz


Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Fabian Paul <fp...@apache.org>.
Hi Deniz,

Great to hear from someone using Ververica Platform with StateFun.
When deploying your job you can specify `additionalDependencies` [1],
which are also pulled and put on the classpath.

Hopefully, that is suitable for your scenario.

Best,
Fabian

[1] https://docs.ververica.com/user_guide/application_operations/deployments/artifacts.html?highlight=additionaldependencies


Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Deniz Koçak <le...@gmail.com>.
Hi Igal,

We are using the official images from Ververica as the Flink
installation. Actually, I was hoping to specify the names of the files
to use at runtime via `mainArgs` in the deployment configuration (or
any other way, maybe). That way we could specify the target yaml
files, but I think this is not possible?

=======================
kind: JAR
mainArgs: '--active-profile nxt'
=======================

It would therefore be easier to use a single jar in our pipelines
instead of creating a different jar file for each environment (at
least for development and production).

For solution 2, do you mean the Flink distribution, i.e. the
/flink/lib folder in the official Docker image?

Thanks,
Deniz


Re: Stateful functions module configurations (module.yaml) per deployment environment

Posted by Igal Shilman <ig...@apache.org>.
Hi Deniz,

StateFun looks for module.yaml(s) on the classpath.
If you are submitting the job to an existing Flink cluster, this
really means that the file needs to be either:
1. packaged with the jar (as you are already doing), or
2. present on the classpath, which means you can place your
module.yaml in the /lib directory of your Flink installation; I
suppose you have different installations in different environments.

I'm not aware of a way to submit any additional files with the jar via
the Flink CLI, but perhaps someone else can chime in :-)

Cheers,
Igal.

