Posted to user@flink.apache.org by Jared Stehler <ja...@intellifylearning.com> on 2017/10/18 18:32:40 UTC

SLF4j logging system gets clobbered?

I’m having an issue where I’ve got logging set up and functioning for my flink-mesos deployment; it works fine up to a point (the same point every time), after which it seems to fall back to “defaults” and loses all of my configured filtering.

2017-10-11 21:37:17.454 [flink-akka.actor.default-dispatcher-17] INFO  o.a.f.m.runtime.clusterframework.MesosFlinkResourceManager  - TaskManager taskmanager-00008 has started.
2017-10-11 21:37:17.454 [flink-akka.actor.default-dispatcher-16] INFO  org.apache.flink.runtime.instance.InstanceManager  - Registered TaskManager at ip-10-80-54-201 (akka.tcp://flink@ip-10-80-54-201.us-west-2.compute.internal:31014/user/taskmanager) as 697add78bd00fe7dc6a7aa60bc8d75fb. Current number of registered hosts is 39. Current number of alive task slots is 39.
2017-10-11 21:37:18.820 [flink-akka.actor.default-dispatcher-17] INFO  org.apache.flink.runtime.instance.InstanceManager  - Registered TaskManager at ip-10-80-54-201 (akka.tcp://flink@ip-10-80-54-201.us-west-2.compute.internal:31018/user/taskmanager) as a6cff0f18d71aabfb3b112f5e2c36c2b. Current number of registered hosts is 40. Current number of alive task slots is 40.
2017-10-11 21:37:18.821 [flink-akka.actor.default-dispatcher-17] INFO  o.a.f.m.runtime.clusterframework.MesosFlinkResourceManager  - TaskManager taskmanager-00010 has started.
2017-10-11 21:39:04,371:6171(0x7f67fe9cd700):ZOO_WARN@zookeeper_interest@1570: Exceeded deadline by 13ms

--- here is where it turns over into the default pattern layout ---
21:39:05.616 [nioEventLoopGroup-5-6] INFO  o.a.flink.runtime.blob.BlobClient - Blob client connecting to akka://flink/user/jobmanager

21:39:09.322 [nioEventLoopGroup-5-6] INFO  o.a.flink.runtime.client.JobClient - Checking and uploading JAR files
21:39:09.322 [nioEventLoopGroup-5-6] INFO  o.a.flink.runtime.blob.BlobClient - Blob client connecting to akka://flink/user/jobmanager
21:39:09.788 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.m.r.c.MesosJobManager - Submitting job 005b570ff2866023aa905f2bc850f7a3 (Sa-As-2b-Submission-Join-V3 := demos-demo500--data-canvas-2-sa-qs-as-v3).
21:39:09.789 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.m.r.c.MesosJobManager - Using restart strategy FailureRateRestartStrategy(failuresInterval=120000 msdelayInterval=1000 msmaxFailuresPerInterval=3) for 005b570ff2866023aa905f2bc850f7a3.
21:39:09.789 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.r.e.ExecutionGraph - Job recovers via failover strategy: full graph restart
21:39:09.790 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.m.r.c.MesosJobManager - Running initialization on master for job Sa-As-2b-Submission-Join-V3 := demos-demo500--data-canvas-2-sa-qs-as-v3 (005b570ff2866023aa905f2bc850f7a3).
21:39:09.790 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.m.r.c.MesosJobManager - Successfully ran initialization on master in 0 ms.
21:39:09.791 [flink-akka.actor.default-dispatcher-4] WARN  o.a.f.configuration.Configuration - Config uses deprecated configuration key 'high-availability.zookeeper.storageDir' instead of proper key 'high-availability.storageDir'
21:39:09.791 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.c.GlobalConfiguration - Loading configuration property: mesos.failover-timeout, 60
21:39:09.791 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.c.GlobalConfiguration - Loading configuration property: mesos.initial-tasks, 1
21:39:09.791 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.c.GlobalConfiguration - Loading configuration property: mesos.maximum-failed-tasks, -1
21:39:09.791 [flink-akka.actor.default-dispatcher-4] INFO  o.a.f.c.GlobalConfiguration - Loading configuration property: mesos.resourcemanager.framework.role, '*'

This is a vexing issue because the app master then proceeds to dump megabytes of “o.a.f.c.GlobalConfiguration - Loading configuration property:” messages into the log, and I’m unable to filter them out.

My logback config is:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} %X{sourceThread} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="SENTRY" class="io.sentry.logback.SentryAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <logger name="org.apache.flink.runtime.metrics.MetricRegistry" level="OFF" />
    <logger name="org.apache.kafka.clients.ClientUtils" level="OFF" />
    <logger name="org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler" level="OFF" />
    <logger name="org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase" level="OFF" />

    <logger name="org.apache.flink.configuration.GlobalConfiguration" level="WARN" />
    <logger name="org.apache.flink.runtime.checkpoint.CheckpointCoordinator" level="WARN" />

    <logger name="org.elasticsearch.client.transport" level="DEBUG" />

    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="SENTRY" />
    </root>
</configuration>



--
Jared Stehler
Chief Architect - Intellify Learning
o: 617.701.6330 x703




Re: SLF4j logging system gets clobbered?

Posted by Jared Stehler <ja...@intellifylearning.com>.
This is with Flink 1.3.2. I’ll paste the full contents of the flink lib dir below, as well as the maven shade config for our job jar.

lib/
-rw-r--r--  1 root root    62983 Oct 18 18:24 activation-1.1.jar
-rw-r--r--  1 root root    44925 Oct 18 18:24 apacheds-i18n-2.0.0-M15.jar
-rw-r--r--  1 root root   691479 Oct 18 18:24 apacheds-kerberos-codec-2.0.0-M15.jar
-rw-r--r--  1 root root    16560 Oct 18 18:24 api-asn1-api-1.0.0-M20.jar
-rw-r--r--  1 root root    79912 Oct 18 18:24 api-util-1.0.0-M20.jar
-rw-r--r--  1 root root   303139 Oct 18 18:24 avro-1.7.4.jar
-rw-r--r--  1 root root 11948376 Oct 18 18:24 aws-java-sdk-1.7.4.jar
-rw-r--r--  1 root root   188671 Oct 18 18:24 commons-beanutils-1.7.0.jar
-rw-r--r--  1 root root   206035 Oct 18 18:24 commons-beanutils-core-1.8.0.jar
-rw-r--r--  1 root root    52988 Oct 18 18:24 commons-cli-1.3.1.jar
-rw-r--r--  1 root root    46725 Oct 18 18:24 commons-codec-1.3.jar
-rw-r--r--  1 root root   588337 Oct 18 18:24 commons-collections-3.2.2.jar
-rw-r--r--  1 root root   241367 Oct 18 18:24 commons-compress-1.4.1.jar
-rw-r--r--  1 root root   298829 Oct 18 18:24 commons-configuration-1.6.jar
-rw-r--r--  1 root root   143602 Oct 18 18:24 commons-digester-1.8.jar
-rw-r--r--  1 root root   305001 Oct 18 18:24 commons-httpclient-3.1.jar
-rw-r--r--  1 root root   185140 Oct 18 18:24 commons-io-2.4.jar
-rw-r--r--  1 root root   284220 Oct 18 18:24 commons-lang-2.6.jar
-rw-r--r--  1 root root  1599627 Oct 18 18:24 commons-math3-3.1.1.jar
-rw-r--r--  1 root root   273370 Oct 18 18:24 commons-net-3.1.jar
-rw-r--r--  1 root root  2657422 Oct 18 18:24 curator-client-4.0.0.jar
-rw-r--r--  1 root root   307244 Oct 18 18:24 curator-framework-4.0.0.jar
-rw-r--r--  1 root root   294100 Oct 18 18:24 curator-recipes-4.0.0.jar
-rw-r--r--  1 root root     2740 Oct 18 18:26 flink-appmaster-1.0-SNAPSHOT.jar
-rw-r--r--  1 root root    20099 Oct 18 18:24 flink-connector-kafka-0.10_2.11-1.3.2.jar
-rw-r--r--  1 root root    29961 Oct 18 18:24 flink-connector-kafka-0.9_2.11-1.3.2.jar
-rw-r--r--  1 root root    78938 Oct 18 18:24 flink-connector-kafka-base_2.11-1.3.2.jar
-rw-r--r--  1 root root 73351238 Aug  3 12:10 flink-dist_2.11-1.3.2.jar
-rw-rw-r--  1 root root   100653 Aug  3 12:07 flink-python_2.11-1.3.2.jar
-rw-r--r--  1 root root 36638085 Aug  3 11:58 flink-shaded-hadoop2-uber-1.3.2.jar
-rw-r--r--  1 root root     7286 Oct 18 18:24 force-shading-1.3.2.jar
-rw-r--r--  1 root root   190432 Oct 18 18:24 gson-2.2.4.jar
-rw-r--r--  1 root root  2442625 Oct 18 18:24 guava-20.0.jar
-rw-r--r--  1 root root    17385 Oct 18 18:24 hadoop-annotations-2.7.2.jar
-rw-r--r--  1 root root    70685 Oct 18 18:24 hadoop-auth-2.7.2.jar
-rw-r--r--  1 root root   103119 Oct 18 18:24 hadoop-aws-2.7.2.jar
-rw-r--r--  1 root root  3443040 Oct 18 18:24 hadoop-common-2.7.2.jar
-rw-r--r--  1 root root  1475955 Oct 18 18:24 htrace-core-3.1.0-incubating.jar
-rw-r--r--  1 root root   424648 Oct 18 18:24 httpclient-4.2.jar
-rw-r--r--  1 root root   223282 Oct 18 18:24 httpcore-4.2.jar
-rw-r--r--  1 root root    55786 Oct 18 18:24 jackson-annotations-2.8.10.jar
-rw-r--r--  1 root root   282634 Oct 18 18:24 jackson-core-2.8.10.jar
-rw-r--r--  1 root root   232248 Oct 18 18:24 jackson-core-asl-1.9.13.jar
-rw-r--r--  1 root root  1242948 Oct 18 18:24 jackson-databind-2.8.10.jar
-rw-r--r--  1 root root    17883 Oct 18 18:24 jackson-jaxrs-1.8.3.jar
-rw-r--r--  1 root root   780664 Oct 18 18:24 jackson-mapper-asl-1.9.13.jar
-rw-r--r--  1 root root    32319 Oct 18 18:24 jackson-xc-1.8.3.jar
-rw-r--r--  1 root root    18490 Oct 18 18:24 java-xmlbuilder-0.4.jar
-rw-r--r--  1 root root   105134 Oct 18 18:24 jaxb-api-2.2.2.jar
-rw-r--r--  1 root root   890168 Oct 18 18:24 jaxb-impl-2.2.3-1.jar
-rw-r--r--  1 root root    16515 Oct 18 18:24 jcl-over-slf4j-1.7.25.jar
-rw-r--r--  1 root root   458739 Oct 18 18:24 jersey-core-1.9.jar
-rw-r--r--  1 root root   147952 Oct 18 18:24 jersey-json-1.9.jar
-rw-r--r--  1 root root   713089 Oct 18 18:24 jersey-server-1.9.jar
-rw-r--r--  1 root root   539735 Oct 18 18:24 jets3t-0.9.0.jar
-rw-r--r--  1 root root    67758 Oct 18 18:24 jettison-1.1.jar
-rw-r--r--  1 root root   539912 Oct 18 18:24 jetty-6.1.26.jar
-rw-r--r--  1 root root   177131 Oct 18 18:24 jetty-util-6.1.26.jar
-rw-r--r--  1 root root   625986 Oct 18 18:24 joda-time-2.9.1.jar
-rw-r--r--  1 root root    78175 Oct 18 18:24 jopt-simple-5.0.3.jar
-rw-r--r--  1 root root   185746 Oct 18 18:24 jsch-0.1.42.jar
-rw-r--r--  1 root root   100636 Oct 18 18:24 jsp-api-2.1.jar
-rw-r--r--  1 root root    33031 Oct 18 18:24 jsr305-3.0.0.jar
-rw-r--r--  1 root root     4596 Oct 18 18:24 jul-to-slf4j-1.7.25.jar
-rw-r--r--  1 root root  5642726 Oct 18 18:24 kafka_2.11-0.10.2.1.jar
-rw-r--r--  1 root root   951041 Oct 18 18:24 kafka-clients-0.10.2.1.jar
-rw-r--r--  1 root root    23645 Oct 18 18:24 log4j-over-slf4j-1.7.25.jar
-rw-r--r--  1 root root   309130 Oct 18 18:24 logback-classic-1.1.11.jar
-rw-r--r--  1 root root   475477 Oct 18 18:24 logback-core-1.1.11.jar
-rw-r--r--  1 root root   236880 Oct 18 18:24 lz4-1.3.0.jar
-rw-r--r--  1 root root    82123 Oct 18 18:24 metrics-core-2.2.0.jar
-rw-r--r--  1 root root  1540617 Oct 18 18:24 mongo-java-driver-3.3.0.jar
-rw-r--r--  1 root root  1330394 Oct 18 18:26 netty-3.10.5.Final.jar
-rw-r--r--  1 root root    29555 Oct 18 18:24 paranamer-2.3.jar
-rw-r--r--  1 root root   533455 Oct 18 18:24 protobuf-java-2.5.0.jar
-rw-r--r--  1 root root  5744974 Oct 18 18:24 scala-library-2.11.8.jar
-rw-r--r--  1 root root   423753 Oct 18 18:24 scala-parser-combinators_2.11-1.0.4.jar
-rw-r--r--  1 root root   150385 Oct 18 18:24 sentry-1.5.3.jar
-rw-r--r--  1 root root     9210 Oct 18 18:24 sentry-logback-1.5.3.jar
-rw-r--r--  1 root root   105112 Oct 18 18:24 servlet-api-2.5.jar
-rw-r--r--  1 root root    41203 Oct 18 18:24 slf4j-api-1.7.25.jar
-rw-r--r--  1 root root   995968 Oct 18 18:24 snappy-java-1.0.4.1.jar
-rw-r--r--  1 root root    23346 Oct 18 18:24 stax-api-1.0-2.jar
-rw-r--r--  1 root root    15010 Oct 18 18:24 xmlenc-0.52.jar
-rw-r--r--  1 root root    94672 Oct 18 18:24 xz-1.0.jar
-rw-r--r--  1 root root    74798 Oct 18 18:24 zkclient-0.10.jar
-rw-r--r--  1 root root   871369 Oct 18 18:24 zookeeper-3.4.10.jar


      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <dependencies>
          <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <version>1.2.7.RELEASE</version>
          </dependency>
        </dependencies>
        <executions>
          <!-- Run shade goal on package phase -->
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <artifactSet>
                <excludes>
                  <!-- This list contains all dependencies of flink-dist 
                    Everything else will be packaged into the fat-jar -->
                  <exclude>org.apache.flink:flink-annotations</exclude>
                  <exclude>org.apache.flink:flink-shaded-hadoop1</exclude>
                  <exclude>org.apache.flink:flink-shaded-hadoop2</exclude>
                  <exclude>org.apache.flink:flink-shaded-curator-recipes</exclude>
                  <exclude>org.apache.flink:flink-core</exclude>
                  <exclude>org.apache.flink:flink-java</exclude>

                  <exclude>org.apache.flink:flink-metrics-core</exclude>
                  <exclude>org.apache.flink:flink-scala_2.11</exclude>
                  <exclude>org.apache.flink:flink-runtime_2.11</exclude>
                  <exclude>org.apache.flink:flink-optimizer_2.11</exclude>
                  <exclude>org.apache.flink:flink-clients_2.11</exclude>
                  <exclude>org.apache.flink:flink-avro_2.11</exclude>
                  <exclude>org.apache.flink:flink-examples-batch_2.11</exclude>
                  <exclude>org.apache.flink:flink-examples-streaming_2.11</exclude>
                  <exclude>org.apache.flink:flink-streaming-java_2.11</exclude>
                  <exclude>org.apache.flink:flink-statebackend-rocksdb_2.11</exclude>

                  <!-- Also exclude very big transitive dependencies of Flink 
                    WARNING: You have to remove these excludes if your code relies on other versions 
                    of these dependencies. -->
                  <exclude>org.scala-lang:scala-library</exclude>
                  <exclude>org.scala-lang:scala-compiler</exclude>
                  <exclude>org.scala-lang:scala-reflect</exclude>

                  <exclude>ch.qos.logback:*</exclude>

                  <exclude>com.esotericsoftware.kryo:kryo</exclude>
                  <exclude>com.esotericsoftware.minlog:minlog</exclude>
                  <exclude>com.fasterxml.jackson.core:jackson-core</exclude>
                  <exclude>com.fasterxml.jackson.core:jackson-databind</exclude>
                  <exclude>com.fasterxml.jackson.core:jackson-annotations</exclude>
                  <exclude>com.github.scopt:scopt_*</exclude>
                  <exclude>com.google.inject:guice</exclude>
                  <exclude>com.google.protobuf:protobuf-java</exclude>
                  <exclude>com.sun.jersey:jersey-core</exclude>
                  <exclude>com.thoughtworks.paranamer:paranamer</exclude>
                  <exclude>com.twitter:chill_*</exclude>
                  <exclude>com.twitter:chill-java</exclude>
                  <exclude>com.twitter:chill-avro_*</exclude>
                  <exclude>com.twitter:chill-bijection_*</exclude>
                  <exclude>com.twitter:bijection-core_*</exclude>
                  <exclude>com.twitter:bijection-avro_*</exclude>
                  <exclude>com.typesafe:config</exclude>
                  <exclude>com.typesafe.akka:akka-actor_*</exclude>
                  <exclude>com.typesafe.akka:akka-remote_*</exclude>
                  <exclude>com.typesafe.akka:akka-slf4j_*</exclude>
                  <exclude>commons-beanutils:commons-beanutils</exclude>
                  <exclude>commons-collections:commons-collections</exclude>
                  <exclude>commons-cli:commons-cli</exclude>
                  <exclude>commons-daemon:commons-daemon</exclude>
                  <exclude>commons-digester:commons-digester</exclude>
                  <exclude>commons-fileupload:commons-fileupload</exclude>
                  <exclude>commons-io:commons-io</exclude>
                  <exclude>commons-lang:commons-lang</exclude>
                  <exclude>commons-logging:commons-logging</exclude>
                  <exclude>commons-net:commons-net</exclude>
                  <exclude>de.javakaffee:kryo-serializers</exclude>
                  <exclude>io.netty:netty</exclude>
                  <exclude>org.apache.avro:avro</exclude>
                  <exclude>org.apache.commons:commons-compress</exclude>
                  <exclude>org.apache.commons:commons-lang3</exclude>
                  <exclude>org.apache.commons:commons-math</exclude>
                  <exclude>org.apache.commons:commons-math3</exclude>
                  <exclude>org.apache.sling:org.apache.sling.commons.json</exclude>
                  <exclude>org.codehaus.jackson:jackson-core-asl</exclude>
                  <exclude>org.codehaus.jackson:jackson-mapper-asl</exclude>
                  <exclude>org.javassist:javassist</exclude>
                  <exclude>org.mongodb:mongo-java-driver</exclude>
                  <exclude>org.tukaani:xz</exclude>
                  <exclude>org.uncommons.maths:uncommons-maths</exclude>
                  <exclude>org.xerial.snappy:snappy-java</exclude>
                  <exclude>org.objenesis:objenesis</exclude>
                  <exclude>org.slf4j:slf4j-api</exclude>
                  <exclude>org.slf4j:log4j-over-slf4j</exclude>
                  <exclude>org.slf4j:slf4j-log4j12</exclude>
                  <exclude>junit:junit</exclude>
                  <exclude>log4j:log4j</exclude>
                  <exclude>stax:stax-api</exclude>

                  <!-- Exclude - brought in by intellify-api - this shades 
                    Flink 1.3's ASM 5.1 with ASM 5.0.3 -->
                  <exclude>com.jayway.jsonpath:json-path</exclude>
                  <exclude>net.minidev:json-smart</exclude>
                  <exclude>net.minidev:accessors-smart</exclude>
                  <exclude>org.ow2.asm:*</exclude>
                  <exclude>*:asm</exclude>
                  <!-- end ASM exclude -->

                </excludes>
              </artifactSet>
              <filters>
                <filter>
                  <artifact>org.apache.flink:*</artifact>
                  <excludes>
                    <!-- exclude shaded google but include shaded curator -->
                    <exclude>org/apache/flink/shaded/com/**</exclude>
                    <exclude>web-docs/**</exclude>
                  </excludes>
                </filter>
                <!-- exclude asm from various sources -->
                <filter>
                  <artifact>asm:*</artifact>
                  <excludes>
                    <exclude>org/objectweb/asm/**</exclude>
                  </excludes>
                </filter>
                <filter>
                  <artifact>org.glassfish.jersey.core:*</artifact>
                  <excludes>
                    <exclude>jersey/repackaged/org/objectweb/**</exclude>
                  </excludes>
                </filter>
                <!-- end exclude asm from various sources -->
                <filter>
                  <!-- Do not copy the signatures in the META-INF folder. 
                    Otherwise, this might cause SecurityExceptions when using the JAR. -->
                  <artifact>*:*</artifact>
                  <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                  </excludes>
                </filter>
              </filters>
              <relocations>
                <relocation>
                  <pattern>org.apache.commons.codec</pattern>
                  <shadedPattern>com.intellify.flink.shaded.org.apache.commons.codec</shadedPattern>
                </relocation>
                <relocation>
                  <pattern>org.apache.http</pattern>
                  <shadedPattern>com.intellify.flink.shaded.org.apache.http</shadedPattern>
                </relocation>
                <relocation>
                  <pattern>com.amazonaws</pattern>
                  <shadedPattern>com.intellify.flink.shaded.com.amazonaws</shadedPattern>
                </relocation>
              </relocations>
              <transformers combine.children="append">
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                  <resource>META-INF/spring.handlers</resource>
                </transformer>
                <transformer
                  implementation="org.springframework.boot.maven.PropertiesMergingResourceTransformer">
                  <resource>META-INF/spring.factories</resource>
                </transformer>
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                  <resource>META-INF/spring.schemas</resource>
                </transformer>
                <transformer
                  implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                  <resource>META-INF/spring.tooling</resource>
                </transformer>
              </transformers>
              <createDependencyReducedPom>false</createDependencyReducedPom>
              <shadedArtifactAttached>true</shadedArtifactAttached>
            </configuration>
          </execution>
        </executions>
      </plugin>
--
Jared Stehler
Chief Architect - Intellify Learning
o: 617.701.6330 x703



> On Oct 19, 2017, at 5:12 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:
> 
> Hi,
> 
> What versions of Flink/logback are you using?
> 
> Have you read this: https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/best_practices.html#use-logback-when-running-flink-out-of-the-ide--from-a-java-application ?
> Maybe this is an issue of having multiple logging tools and their configurations on the class path?
> 
> Piotrek
> 
>> On 18 Oct 2017, at 20:32, Jared Stehler <jared.stehler@intellifylearning.com> wrote:
>> 
>> [original message snipped]
> 


Re: SLF4j logging system gets clobbered?

Posted by Piotr Nowojski <pi...@data-artisans.com>.
Hi,

What versions of Flink/logback are you using?

Have you read this: https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/best_practices.html#use-logback-when-running-flink-out-of-the-ide--from-a-java-application ?
Maybe this is an issue of having multiple logging tools and their configurations on the class path?
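One way to check that, assuming logback-classic is meant to be the only binding on the class path (the class name and resource list below are only illustrative):

// Diagnostic sketch: list every logging configuration file and SLF4J 1.x
// binding visible to the current class loader, to see whether a second
// configuration could be shadowing the intended logback.xml.
import java.net.URL;
import java.util.Collections;

public class LoggingClasspathCheck {
    public static void main(String[] args) throws Exception {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        String[] resources = {
                "logback.xml",
                "log4j.properties",
                "org/slf4j/impl/StaticLoggerBinder.class"
        };
        for (String res : resources) {
            for (URL url : Collections.list(cl.getResources(res))) {
                System.out.println(res + " -> " + url);
            }
        }
    }
}

If more than one configuration file or binding shows up, that duplicate is a likely source of the clobbering.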

Piotrek

> On 18 Oct 2017, at 20:32, Jared Stehler <ja...@intellifylearning.com> wrote:
> 
> [original message snipped]


Re: SLF4j logging system gets clobbered?

Posted by Till Rohrmann <tr...@apache.org>.
Hi Jared,

this problem looks strange to me. Logback should not change its
configuration unless it is explicitly being tinkered with.
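
One way to see whether the LoggerContext is in fact being reset or
reconfigured at runtime is to dump logback's internal status (a sketch
assuming logback-classic is the active SLF4J binding, as in the lib dir
you posted):

// Sketch: print logback's internal status events, which include the URL of
// the configuration file that was loaded and any later reconfigurations.
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.core.util.StatusPrinter;
import org.slf4j.LoggerFactory;

public class LogbackStatusDump {
    public static void main(String[] args) {
        // The cast is only valid when logback-classic is the bound implementation.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        StatusPrinter.print(context);
    }
}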

Could you quickly explain to me how your Mesos setup works? Are you
submitting the job via the Web UI? I'm asking because I see client-side
as well as cluster-side logging statements in your log snippet. It would
also be helpful to get access to the complete cluster logs (including
the client) in order to pinpoint the problem. Would that be possible?

Have you tried using a different logback version, just to rule out that
this is a logback-specific problem?

Concerning the verbose GlobalConfiguration logging, this could be related
to [1], which is fixed in the latest master.

[1] https://issues.apache.org/jira/browse/FLINK-7643

On Mon, Oct 23, 2017 at 10:17 AM, Piotr Nowojski <pi...@data-artisans.com>
wrote:

> Till could you take a look at this?
>
> Piotrek
>
> On 18 Oct 2017, at 20:32, Jared Stehler <jared.stehler@intellifylearning.com> wrote:
>
> [original message snipped]
>

Re: SLF4j logging system gets clobbered?

Posted by Piotr Nowojski <pi...@data-artisans.com>.
Till could you take a look at this?

Piotrek

> On 18 Oct 2017, at 20:32, Jared Stehler <ja...@intellifylearning.com> wrote:
> 
> [original message snipped]