Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2021/10/06 10:09:00 UTC
[jira] [Resolved] (SPARK-36919) Make BadRecordException serializable
[ https://issues.apache.org/jira/browse/SPARK-36919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-36919.
----------------------------------
Fix Version/s: 3.1.3
               3.2.0
               3.0.4
   Resolution: Fixed
Issue resolved by pull request 34167
[https://github.com/apache/spark/pull/34167]
> Make BadRecordException serializable
> ------------------------------------
>
> Key: SPARK-36919
> URL: https://issues.apache.org/jira/browse/SPARK-36919
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 3.2.0, 3.3.0, 3.2.1
> Reporter: Tianhan Hu
> Assignee: Tianhan Hu
> Priority: Minor
> Fix For: 3.0.4, 3.2.0, 3.1.3
>
>
> While migrating a Spark application from 2.4.x to 3.1.x, we found a difference in the exception chaining behavior. When parsing a malformed CSV, the root cause exception should be {{Caused by: java.lang.RuntimeException: Malformed CSV record}}, but only the top-level exception is kept, and all lower-level exceptions and the root cause are lost. Thus, calling {{ExceptionUtils.getRootCause}} on the exception just returns the exception itself.
> The reason for the difference is that the {{RuntimeException}} is wrapped in {{BadRecordException}}, which has non-serializable fields. When we serialize the exception in the task and deserialize it in the scheduler, those fields cause serialization to fail and the exception chain is lost.
> This PR marks the non-serializable fields of {{BadRecordException}} as transient, so that the rest of the exception can be serialized and deserialized properly.
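The mechanism behind the fix can be sketched outside Spark with a small, hypothetical example (the class {{BadRecordExample}}, the {{Runnable}} field, and {{roundTrip}} below are illustrative, not Spark's actual code): marking a non-serializable field {{transient}} lets the rest of the exception, including its cause chain, survive a Java serialization round trip of the kind that happens between executor and scheduler.

```java
import java.io.*;

// Hypothetical analogue of BadRecordException: an exception that carries
// a non-serializable closure alongside a serializable cause chain.
class BadRecordExample extends RuntimeException {
    // Runnable is not Serializable. Without "transient",
    // ObjectOutputStream.writeObject would throw NotSerializableException
    // and the whole exception (cause included) would be lost in transit.
    private final transient Runnable recordSupplier;

    BadRecordExample(Runnable recordSupplier, Throwable cause) {
        super("Malformed record", cause);
        this.recordSupplier = recordSupplier;
    }
}

public class TransientDemo {
    // Serialize and deserialize an object, mimicking what happens when a
    // task failure is shipped from an executor back to the scheduler.
    static <T> T roundTrip(T obj) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            T copy = (T) ois.readObject();
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        Throwable root = new RuntimeException("Malformed CSV record");
        Throwable copy = roundTrip(new BadRecordExample(() -> {}, root));
        // The transient field is dropped, but the cause chain survives,
        // so getRootCause-style traversal still finds the real cause.
        System.out.println(copy.getCause().getMessage());
    }
}
```

Removing the {{transient}} keyword makes {{roundTrip}} throw {{java.io.NotSerializableException}}, which is the failure mode the issue describes.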
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org