Posted to user@flink.apache.org by Theofilos Kakantousis <tk...@kth.se> on 2016/06/10 14:00:26 UTC

Application log on Yarn FlinkCluster

Hi all,

Flink 1.0.3
Hadoop 2.4.0

When running a job on a Flink cluster on Yarn, the application output is 
not included in the Yarn log. Instead, it is only printed to the stdout 
of the machine from which I run my program. For the jobmanager, I'm 
using the log4j.properties file from the flink/conf directory. Yarn log 
aggregation is enabled, and the YarnJobManager log does show up in the 
Yarn log. The application is submitted by a Flink Client to the 
FlinkYarnCluster using a PackagedProgram.

Is this expected behavior, and if so, is there a way to include the 
application output in the Yarn aggregated log? Thanks!

Cheers,
Theofilos
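
For context, the log4j.properties shipped in flink/conf on the 1.0.x 
line looked roughly like the sketch below. This is an approximation, not 
the verbatim file: the ${log.file} property is substituted by Flink's 
startup scripts and, on Yarn, points into the container's log directory, 
which is exactly what log aggregation collects.

    # route everything at INFO and above into a single file appender
    log4j.rootLogger=INFO, file

    # ${log.file} is filled in by the Flink scripts
    # (on Yarn: a file inside the container log directory)
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=${log.file}
    log4j.appender.file.append=false
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n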


Re: Application log on Yarn FlinkCluster

Posted by Theofilos Kakantousis <tk...@kth.se>.
Great then, I will look into my configuration. Thanks for your help!

Cheers,
Theofilos



Re: Application log on Yarn FlinkCluster

Posted by Maximilian Michels <mx...@apache.org>.
You should also see TaskManager output in the logs. I just verified this 
using Flink 1.0.3 with Hadoop 2.7.1: I executed the Iterate example, and 
the logs were aggregated correctly, including the TaskManager logs.

I'm wondering: is there anything in the Hadoop logs of the 
ResourceManager/NodeManager that could indicate a transfer failure?


Re: Application log on Yarn FlinkCluster

Posted by Theofilos Kakantousis <tk...@kth.se>.
Hi,

By "Yarn aggregated log" I mean that Yarn log aggregation is enabled and 
the log I'm referring to is the one returned by `yarn logs -applicationId 
<id>`. When running a Spark job on the same setup, for example, the 
aggregated log contains all the information printed out by the application.

Cheers,
Theofilos



Re: Application log on Yarn FlinkCluster

Posted by Maximilian Michels <mx...@apache.org>.
Please use `yarn logs -applicationId <id>` to retrieve the logs. If you 
have enabled log aggregation, this will give you all container logs 
concatenated.

Cheers,
Max
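
With the application id from the run shown further down this thread, the 
retrieval would look like, for example:

    yarn logs -applicationId application_1465901188070_0037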


Re: Application log on Yarn FlinkCluster

Posted by Theofilos Kakantousis <tk...@kth.se>.
Hi Max,

The runBlocking(..) problem was due to a Netty dependency issue in my 
project; it works fine now :)

To pinpoint the logging issue, I ran a single Flink job on Yarn as per 
the documentation, "./bin/flink run -m yarn-cluster -yn 2 
./examples/streaming/Iteration.jar", and I see the same issue. During 
the job I can see the taskmanager logs in the containers, and a sample 
output from taskmanager.out is the following:
"cat 
/srv/hadoop/logs/userlogs/application_1465901188070_0037/container_1465901188070_0037_01_000002/taskmanager.out 

2> ((49,1),3)
2> ((25,11),4)
2> ((46,44),2
.."

However, the Yarn aggregated log contains only the jobmanager output. Is 
this expected, or could it indicate a problem with my Hadoop logging 
configuration not picking up the taskmanager logs?

Cheers,
Theofilos
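
The "2>" lines above come from the example's print() sink: print() 
writes to the TaskManagers' stdout, which is captured in taskmanager.out 
rather than taskmanager.log. A hypothetical variant that routes the 
records through SLF4J instead, so they land in taskmanager.log next to 
the framework output (the class and message text below are illustrative, 
not taken from the example itself):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Pass-through mapper that logs each record instead of printing it.
    // It executes on the TaskManagers, so its output goes into
    // taskmanager.log, which Yarn log aggregation collects together
    // with the other container logs.
    public class LoggingMapper implements MapFunction<String, String> {
        private static final Logger LOG = LoggerFactory.getLogger(LoggingMapper.class);

        @Override
        public String map(String value) {
            LOG.info("record: {}", value);
            return value;
        }
    }

It would be wired in with stream.map(new LoggingMapper()) in place of 
stream.print().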



Re: Application log on Yarn FlinkCluster

Posted by Maximilian Michels <mx...@apache.org>.
Hi Theofilos,

Flink doesn't send the local client output to the Yarn cluster. I
think this will only change once we move the entire execution of the
job to the cluster framework. All output of the actual Flink job
should be within the JobManager or TaskManager logs.

There is something wrong with the network communication if the Client
doesn't return from `runBlocking(..)`. It would be interesting to take a
look at the logs to find out why that could be.

Cheers,
Max



Re: Application log on Yarn FlinkCluster

Posted by Theofilos Kakantousis <tk...@kth.se>.
Hi Robert,

Thanks for the prompt reply. I'm using the IterateExample from the Flink 
examples. In the Yarn log I get entries for the YarnJobManager and 
ExecutionGraph, but I was wondering if there is a way to push all the 
logging that the client produces into the Yarn log, including the 
System.out calls. Is there a way to modify the example to use a logging 
framework to achieve this?

Also, when I submit the program using the Client runBlocking method, 
although I can see in the taskmanager and jobmanager logs that the 
application has finished, the runBlocking method does not return. Should 
I call it in a separate thread?

Cheers,
Theofilos
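
For reference, a bare-bones submission along these lines, sketched from 
memory of the 1.0.x client API; the class names, method signatures, and 
config keys here are assumptions to check against the actual Flink 
version, not a definitive recipe. runBlocking(..) is meant to block 
until the job finishes and then return on its own, without a separate 
thread:

    import java.io.File;

    import org.apache.flink.client.program.Client;
    import org.apache.flink.client.program.PackagedProgram;
    import org.apache.flink.configuration.ConfigConstants;
    import org.apache.flink.configuration.Configuration;

    public class Submit {
        public static void main(String[] args) throws Exception {
            // JobManager address/port of the running Yarn session
            // (placeholders; take the real values from the FlinkYarnCluster).
            Configuration config = new Configuration();
            config.setString(ConfigConstants.JOB_MANAGER_IPC_ADDRESS_KEY, "jobmanager-host");
            config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 6123);

            // "/path/to/job.jar" is a placeholder for the user jar.
            PackagedProgram program = new PackagedProgram(new File("/path/to/job.jar"), args);
            Client client = new Client(config);
            try {
                client.runBlocking(program, 1); // blocks until the job has finished
            } finally {
                client.shutdown();
            }
        }
    }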



Re: Application log on Yarn FlinkCluster

Posted by Robert Metzger <rm...@apache.org>.
Hi Theofilos,

How exactly are you writing the application output?
Are you using a logging framework?
Are you writing the log statements from the open(), map(), or invoke() 
methods, or from some constructors? (I'm asking because some parts are 
executed on the cluster and others locally.)
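
To make the locality concrete, here is a sketch of where each piece of a 
rich function executes (the class name is made up for illustration; the 
log calls assume SLF4J, which Flink uses internally):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class WhereDoesItRun extends RichMapFunction<String, String> {
        private static final Logger LOG = LoggerFactory.getLogger(WhereDoesItRun.class);

        public WhereDoesItRun() {
            // Runs in the client JVM while main() assembles the program,
            // so this line only shows up in the client's stdout/log.
            LOG.info("constructor");
        }

        @Override
        public void open(Configuration parameters) {
            // Runs on a TaskManager after deployment; this line ends up
            // in the TaskManager log on the cluster.
            LOG.info("open()");
        }

        @Override
        public String map(String value) {
            // Also runs on a TaskManager, once per record.
            LOG.info("map({})", value);
            return value;
        }
    }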
