Posted to user@flink.apache.org by Francisco Gonzalez Barea <Fr...@piksel.com> on 2017/08/01 08:24:05 UTC

Re: Flink CLI cannot submit job to Flink on Mesos

Hi Stephan,

So, do you mean to remove the “-m” param from the flink CLI call? And, on the other hand, that I should add the ZooKeeper configuration on both sides, in the remote Flink cluster and locally in the Flink CLI config, right?

Regards


On 31 Jul 2017, at 22:21, Stephan Ewen <se...@apache.org> wrote:

Hi Francisco!

Can you drop the explicit address of the jobmanager? The client should pick up that address automatically from ZooKeeper as well (together with the HA leader session ID).

Please check if you have the ZooKeeper HA config entries in the config used by the CLI.
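For reference, the ZooKeeper HA entries in question go into the flink-conf.yaml used by the CLI; a minimal sketch for Flink 1.3, with placeholder quorum hosts and storage path:

```yaml
# flink-conf.yaml on the CLI side -- quorum addresses and storage path are placeholders
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.zookeeper.path.root: /flink
high-availability.storageDir: hdfs:///flink/ha/
```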

Stephan


On Mon, Jul 31, 2017 at 6:27 PM, Francisco Gonzalez Barea <Fr...@piksel.com> wrote:
Hi again,

On the other hand, we are running the following flink CLI command:

./flink run -d -m ${jobmanager.rpc.address}:${jobmanager.rpc.port}  ${our-program-jar} ${our-program-params}

Maybe we are using the command wrongly?

Thank you
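Following Stephan's suggestion earlier in the thread, the fix amounts to dropping the -m flag and letting the client resolve the leader via ZooKeeper; a sketch of both invocation styles (jar name and arguments are placeholders):

```shell
# Fixed-address submission: only works if the given host:port is the current leader
./flink run -d -m jobmanager-host:6123 my-job.jar --input /data/in

# HA submission: no -m; the client reads the leader address and session ID from
# ZooKeeper, provided the CLI's flink-conf.yaml contains the high-availability.* entries
./flink run -d my-job.jar --input /data/in
```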

On 28 Jul 2017, at 11:07, Till Rohrmann <tr...@apache.org> wrote:

Hi Francisco,

have you set the right high-availability configuration options in your client configuration as described here [1]? If not, then Flink is not able to find the correct JobManager because it retrieves the address as well as a fencing token (called leader session id) from the HA store (ZooKeeper).

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/mesos.html#high-availability

Cheers,
Till

On Thu, Jul 27, 2017 at 6:20 PM, Francisco Gonzalez Barea <Fr...@piksel.com> wrote:
Hello,

We're having a lot of issues while trying to submit a job remotely using the Flink CLI command-line tool. We have tried different configurations, but in all of them we get errors from Akka while trying to connect. I will try to summarise the configurations we've tried.

- Flink 1.3.0 deployed within a Docker container on a Mesos cluster (using Marathon)
- This Flink has the property jobmanager.rpc.address set to a hostname (i.e. something like ip-XXXXXXXXX.eu-west-1.compute.internal)
- We use the same Flink version for the client remotely (e.g. on my laptop).

When I try to submit the job using the command flink run -m myHostName:myPort (the same values as in jobmanager.rpc.address and jobmanager.rpc.port), after some time waiting I get the trace at the end of this email. On the Flink side we get this error from Akka:

Association with remote system [akka.tcp://flink@10.203.23.24:24469] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://flink@10.203.23.24:24469]] Caused by: [Connection refused: /10.203.23.24:24469]

After reading a bit, it seems there are some problems related to Akka resolving hostnames to IPs, so we decided to start up the same Flink but changing jobmanager.rpc.address to the direct IP (i.e. something like XX.XXX.XX.XX). In this case I'm getting the same trace (at the end of the email) on the client side and this one from the Flink server:

Discard message LeaderSessionMessage(00000000-0000-0000-0000-000000000000,SubmitJob(JobGraph(jobId: b25d5c5ced962632abc5ee9ef867792e),DETACHED)) because the expected leader session ID b4f53899-5d70-467e-8e9d-e56eeb60b6e3 did not equal the received leader session ID 00000000-0000-0000-0000-000000000000.
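The "Discard message" line shows Flink's fencing check: the client submitted with the all-zero leader session ID because it never learned the real one from ZooKeeper. A toy sketch of the idea (illustrative only, not Flink's actual code):

```python
import uuid

# A client without HA configuration cannot learn the leader session ID,
# so it effectively sends the all-zero UUID as its fencing token.
NULL_SESSION_ID = uuid.UUID(int=0)

def jobmanager_accepts(expected: uuid.UUID, received: uuid.UUID) -> bool:
    """Fencing: only messages carrying the current leader session ID are processed."""
    return expected == received

leader_id = uuid.UUID("b4f53899-5d70-467e-8e9d-e56eeb60b6e3")
print(jobmanager_accepts(leader_id, NULL_SESSION_ID))  # False: message is discarded
print(jobmanager_accepts(leader_id, leader_id))        # True: message is processed
```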

We have tried some other stuff but without success… any clue that could help us?

Thanks in advance!

org.apache.flink.client.program.ProgramInvocationException: The program execution failed: JobManager did not respond within 60000 milliseconds
at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:454)
at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:99)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
at org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:76)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:345)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
Caused by: org.apache.flink.runtime.client.JobTimeoutException: JobManager did not respond within 60000 milliseconds
at org.apache.flink.runtime.client.JobClient.submitJobDetached(JobClient.java:426)
at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:451)
... 15 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [60000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at scala.concurrent.Await.result(package.scala)
at org.apache.flink.runtime.client.JobClient.submitJobDetached(JobClient.java:423)
... 16 more




This message is private and confidential. If you have received this message in error, please notify the sender or servicedesk@piksel.com<ma...@piksel.com> and remove it from your system.

Piksel Inc is a company registered in the United States, 2100 Powers Ferry Road SE, Suite 400, Atlanta, GA 30339





Re: Flink CLI cannot submit job to Flink on Mesos

Posted by Stephan Ewen <se...@apache.org>.
Cool, good to hear!

It is one of those "it's a feature, not a bug" situations ;-)

Flink's HA mode supports multiple masters, so the CLI needs to have a way
to find which master is "leader" (active, versus the passive masters on
standby). That discovery goes through ZooKeeper as well (which is the
ground truth for who is the leader).
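As an illustration, the leader information lives under the configured ZooKeeper root; the paths below assume the default high-availability.zookeeper.path.root of /flink and a default cluster id, so treat them as placeholders:

```shell
# Inspect the HA znodes with the ZooKeeper CLI (host and paths are placeholders)
bin/zkCli.sh -server zk-1:2181 <<'EOF'
ls /flink
ls /flink/default
EOF
```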

Stephan



Re: Flink CLI cannot submit job to Flink on Mesos

Posted by Francisco Gonzalez Barea <Fr...@piksel.com>.
Hey! It's working now!!

I will do a summary for those who might have the same problem in the future:

- Flink 1.3.0 dockerized on Mesos:
- Add the HA configuration values in your flink app: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/jobmanager_high_availability.html#config-file-flink-confyaml
- Add the Mesos HA configuration values in your flink app: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/mesos.html#high-availability

- Flink CLI 1.3.0 on my local machine (make sure you use the same version!!)
- Add the same HA configuration values in your Flink CLI configuration: https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/jobmanager_high_availability.html#config-file-flink-confyaml


With those steps, my ./flink run command is working like a charm.

Thank you very much guys!

Regards,
Francisco
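
Putting the summary together, both sides end up sharing the same ZooKeeper HA entries; a consolidated flink-conf.yaml sketch for Flink 1.3, where the hosts, paths, and the Mesos worker-recovery key are placeholders/assumptions based on the linked docs:

```yaml
# Shared HA entries -- identical on the Mesos-deployed Flink and on the CLI machine
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.zookeeper.path.root: /flink
high-availability.storageDir: hdfs:///flink/ha/

# Cluster side only: Mesos HA addition from the second linked page
high-availability.zookeeper.path.mesos-workers: /mesos-workers
```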

