Posted to user@flink.apache.org by sundy <54...@qq.com> on 2018/03/08 02:11:20 UTC

Job is cancelled, but the stdout log still prints

Hi:

I am facing a problem: the TaskManagers on 3 nodes are still running, and I have made sure that all jobs are cancelled, but I can see that the stdout logs are still being printed. The job's parallelism is 6.

I set up a scheduled thread pool like this:

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

static {
  // Static initializer: runs once when the class is loaded.
  Executors.newScheduledThreadPool(1).scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
      try {
        getLiveInfo();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }
  }, 0, 60, TimeUnit.SECONDS);
}
Does that mean the static block's scheduled task will keep running in the TaskManagers even after the job is cancelled? That's weird.

Re: Job is cancelled, but the stdout log still prints

Posted by sundy <54...@qq.com>.
I got it. That’s really a big problem.

Thank you very much

> On 8 Mar 2018, at 21:03, kedar mhaswade <ke...@gmail.com> wrote:
> 
> Also, in addition to what Gary said, if you take Flink completely out of the picture and write a simple Java class with a main method and that static block (!) doing some long-running task like getLiveInfo(), chances are that your class will make the JVM hang!
> 
> Basically, what you are doing is starting a bunch of threads (which are non-daemon by default) and leaving them running. Since there is at least one non-daemon thread still running, the JVM is not allowed to shut down, which causes the hang.
> 
> Regards,
> Kedar


Re: Job is cancelled, but the stdout log still prints

Posted by kedar mhaswade <ke...@gmail.com>.
Also, in addition to what Gary said, if you take Flink completely out of the
picture and write a simple Java class with a main method and that static
block (!) doing some long-running task like getLiveInfo(), chances are that
your class will make the JVM hang!

Basically, what you are doing is starting a bunch of threads (which are
non-daemon by default) and leaving them running. Since there is at least one
non-daemon thread still running, the JVM is not allowed to shut down, which
causes the hang.
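
For illustration, a minimal standalone class along these lines (the class name is made up, not the poster's actual code) reproduces the hang: main() returns, but the non-daemon pool thread keeps the JVM alive.

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StaticBlockHang {

  static {
    // The default thread factory creates non-daemon threads,
    // so this pool keeps the JVM alive after main() returns.
    Executors.newScheduledThreadPool(1).scheduleAtFixedRate(
        () -> System.out.println("still running"), 0, 1, TimeUnit.SECONDS);
  }

  public static void main(String[] args) {
    System.out.println("main() is done, but the JVM will not exit");
  }
}

Marking the threads as daemon (via a custom ThreadFactory) would let the JVM exit, but in a Flink job the cleaner fix is still to shut the pool down explicitly.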

Regards,
Kedar


On Thu, Mar 8, 2018 at 3:15 AM, Gary Yao <ga...@data-artisans.com> wrote:

> Hi,
>
> You are not shutting down the ScheduledExecutorService [1], which means that
> after job cancellation the thread will continue running getLiveInfo(), and
> the user code class loader and your classes won't be garbage collected. You
> should use the RichFunction#close callback to shut down your thread pool [2].
>
> Best,
> Gary
>
> [1] https://stackoverflow.com/questions/10504172/how-to-shutdown-an-executorservice
> [2] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/api_concepts.html#rich-functions
>
>
> On Thu, Mar 8, 2018 at 3:11 AM, sundy <54...@qq.com> wrote:
>
>>
>> Hi:
>>
>> I am facing a problem: the TaskManagers on 3 nodes are still running, and I
>> have made sure that all jobs are cancelled, but I can see that the stdout
>> logs are still being printed. The job's parallelism is 6.
>>
>> I set up a scheduled thread pool like this:
>>
>> static {
>>   Executors.newScheduledThreadPool(1).scheduleAtFixedRate(new Runnable() {
>>     @Override
>>     public void run() {
>>       try {
>>         getLiveInfo();
>>       } catch (Exception e) {
>>         e.printStackTrace();
>>       }
>>     }
>>   }, 0, 60, TimeUnit.SECONDS);
>> }
>>
>> Does that mean the static block's scheduled task will keep running in the
>> TaskManagers even after the job is cancelled? That's weird.
>>
>
>

Re: Job is cancelled, but the stdout log still prints

Posted by Gary Yao <ga...@data-artisans.com>.
Hi,

You are not shutting down the ScheduledExecutorService [1], which means that
after job cancellation the thread will continue running getLiveInfo(), and
the user code class loader and your classes won't be garbage collected. You
should use the RichFunction#close callback to shut down your thread pool [2].
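
As a minimal sketch of that approach (LiveInfoMapper is a made-up name and getLiveInfo() a placeholder for your monitoring call): create the pool in open() and shut it down in close(), so that cancelling the job stops the thread and lets your classes be unloaded.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class LiveInfoMapper extends RichMapFunction<String, String> {

  private transient ScheduledExecutorService scheduler;

  @Override
  public void open(Configuration parameters) {
    // Create the pool per task instance instead of in a static block.
    scheduler = Executors.newScheduledThreadPool(1);
    scheduler.scheduleAtFixedRate(this::getLiveInfo, 0, 60, TimeUnit.SECONDS);
  }

  @Override
  public void close() {
    // Called when the job is cancelled; interrupts the worker thread.
    if (scheduler != null) {
      scheduler.shutdownNow();
    }
  }

  @Override
  public String map(String value) {
    return value; // pass-through; the real transformation goes here
  }

  private void getLiveInfo() {
    // placeholder for the poster's periodic monitoring call
  }
}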

Best,
Gary

[1] https://stackoverflow.com/questions/10504172/how-to-shutdown-an-executorservice
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/api_concepts.html#rich-functions


On Thu, Mar 8, 2018 at 3:11 AM, sundy <54...@qq.com> wrote:

>
> Hi:
>
> I am facing a problem: the TaskManagers on 3 nodes are still running, and I
> have made sure that all jobs are cancelled, but I can see that the stdout
> logs are still being printed. The job's parallelism is 6.
>
> I set up a scheduled thread pool like this:
>
> static {
>   Executors.newScheduledThreadPool(1).scheduleAtFixedRate(new Runnable() {
>     @Override
>     public void run() {
>       try {
>         getLiveInfo();
>       } catch (Exception e) {
>         e.printStackTrace();
>       }
>     }
>   }, 0, 60, TimeUnit.SECONDS);
> }
>
> Does that mean the static block's scheduled task will keep running in the
> TaskManagers even after the job is cancelled? That's weird.
>