Posted to user@flink.apache.org by simpleusr <ce...@gmail.com> on 2019/02/08 14:06:51 UTC

Flink Standalone cluster - dumps


We are using a standalone cluster and submitting jobs through the command
line client.

As far as I understand, the job is executed in the task manager, and a
single task manager represents a single JVM. So a thread dump shows threads
from all jobs bound to that task manager.
Two questions:

1) Is there a setting that makes each task manager run only a single job?
Could setting the slot size to 1 be a workaround?
2) If the option above is not possible, is there any way to get a thread
dump per job?


Regards




Re: Flink Standalone cluster - dumps

Posted by simpleusr <ce...@gmail.com>.
Hi Chesnay,

Many thanks.




Re: Flink Standalone cluster - dumps

Posted by Chesnay Schepler <ch...@apache.org>.
1) Setting the slot size to 1 can be used as a work-around; I'm not
aware of another solution for standalone clusters. In the
YARN/Kubernetes world we support the notion of a "job cluster", which is
started and run only for a single job, but this isn't supported in
standalone mode. A minimal config sketch is included below.
2) None that I'm aware of unfortunately. See the note after the config
sketch below for a JVM-level approximation.
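
For reference, a minimal sketch of the work-around from 1), assuming the
usual conf/flink-conf.yaml on every task manager host (check the docs for
your Flink version for the exact key names):

  # flink-conf.yaml
  # One slot per TaskManager, so each TaskManager JVM only ever runs
  # tasks belonging to a single job.
  taskmanager.numberOfTaskSlots: 1

  # Optional: keep the default parallelism modest so jobs spread across
  # TaskManagers instead of competing for slots.
  parallelism.default: 1

With this setting a job of parallelism N occupies N task managers, so the
cluster needs at least as many task managers as the summed parallelism of
all running jobs.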
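
Regarding 2): there is no per-job dump, but since each task manager is one
JVM you can still dump a whole TaskManager process with the standard JDK
tools, and with the one-slot setup above such a dump effectively covers a
single job. A rough sketch, run on the task manager host (the TaskManager
main class name varies between Flink versions, so the grep pattern is only
an assumption):

  # List JVMs with their main class and pick the TaskManager's PID.
  jps -l | grep -i taskmanager

  # Write a thread dump of that JVM to a file (replace <pid>).
  jstack <pid> > taskmanager-threaddump.txt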
