Posted to issues@spark.apache.org by "Cédric Chantepie (Jira)" <ji...@apache.org> on 2021/09/14 15:16:00 UTC

[jira] [Updated] (SPARK-36756) Spark uncaught exception handler is using logError

     [ https://issues.apache.org/jira/browse/SPARK-36756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cédric Chantepie updated SPARK-36756:
-------------------------------------
    Description: 
Spark [sets up a handler|https://github.com/apache/spark/blob/v3.0.1/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L915] to catch uncaught exceptions.

The [handler itself catches any subsequent exception|https://github.com/apache/spark/blob/v3.0.1/core/src/main/scala/org/apache/spark/util/SparkUncaughtExceptionHandler.scala#L64] that may be thrown while reporting the initial uncaught exception.

The issue is that if the subsequent exception is itself due to a log4j problem, then, since that `catch` also goes through `logError`, it loops: no exception is ever displayed in the logs, and the process exits with code 54.
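
Roughly, the failure mode can be sketched as follows (a simplified illustration, not the actual Spark source; `brokenLogError` is a hypothetical stand-in for `logError` when the log4j appender itself throws):

{noformat}
object UncaughtHandlerLoopSketch {
  // Stand-in for Logging.logError backed by a broken log4j appender
  // (simulated failure; not the real Spark code).
  def brokenLogError(msg: String, t: Throwable): Unit =
    throw new IllegalStateException(s"appender failed while logging: $msg")

  def uncaughtException(thread: Thread, exception: Throwable): Unit = {
    try {
      brokenLogError(s"Uncaught exception in thread $thread", exception) // throws
    } catch {
      case t: Throwable =>
        // The handler's catch also reports through logError, so it fails
        // the same way: the original exception never reaches the logs.
        brokenLogError("Exception while handling uncaught exception", t)
    }
  }

  def main(args: Array[String]): Unit =
    uncaughtException(Thread.currentThread(), new RuntimeException("boom"))
}
{noformat}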

For example, we hit a log4j issue when using Logstash JSON logging:

{noformat}
at net.logstash.log4j.JSONEventLayoutV1.format(JSONEventLayoutV1.java:137)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:576)
at org.apache.spark.internal.Logging.logError(Logging.scala:94)
at org.apache.spark.internal.Logging.logError$(Logging.scala:93)
at org.apache.spark.util.SparkUncaughtExceptionHandler.logError(SparkUncaughtExceptionHandler.scala:28)
at org.apache.spark.util.SparkUncaughtExceptionHandler.uncaughtException(SparkUncaughtExceptionHandler.scala:37)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1057)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052)
at java.lang.Thread.dispatchUncaughtException(Thread.java:1959)
{noformat}

*Suggested fix:*

Use `println` and `printStackTrace` directly as a safe fallback in this `catch`, as sketched just below.
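
A minimal sketch of what that fallback could look like (illustrative only, not an actual patch; `logError` is stubbed here to simulate the broken backend):

{noformat}
object StderrFallbackSketch {
  // Stub standing in for logError with a broken log4j backend.
  def logError(msg: String, t: Throwable): Unit =
    throw new IllegalStateException("appender failed")

  def uncaughtException(thread: Thread, exception: Throwable): Unit = {
    try {
      logError(s"Uncaught exception in thread $thread", exception)
    } catch {
      case t: Throwable =>
        // Safe fallback that cannot recurse into log4j: plain stderr.
        System.err.println(s"Uncaught exception in thread $thread:")
        exception.printStackTrace(System.err)
        System.err.println("Additionally, reporting it failed with:")
        t.printStackTrace(System.err)
    }
  }

  def main(args: Array[String]): Unit =
    uncaughtException(Thread.currentThread(), new RuntimeException("boom"))
}
{noformat}

Since `System.err` does not go through log4j, the original exception always surfaces, even when the logging backend itself is the component that failed.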

> Spark uncaught exception handler is using logError
> --------------------------------------------------
>
>                 Key: SPARK-36756
>                 URL: https://issues.apache.org/jira/browse/SPARK-36756
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.0.4
>            Reporter: Cédric Chantepie
>            Priority: Blocker


