Posted to issues@spark.apache.org by "geekyouth (Jira)" <ji...@apache.org> on 2021/07/09 09:37:00 UTC

[jira] [Commented] (SPARK-36069) spark function from_json should output field name, field type and field value when FAILFAST mode throw exception

    [ https://issues.apache.org/jira/browse/SPARK-36069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377968#comment-17377968 ] 

geekyouth commented on SPARK-36069:
-----------------------------------

here is my unit test output:
org.apache.spark.SparkException: Malformed records are detected in record parsing. Parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:70)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:597)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:461)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.subExpr_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:341)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.catalyst.util.BadRecordException: java.lang.RuntimeException: Cannot parse 0.31 as double.
	at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:478)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.$anonfun$parser$3(jsonExpressions.scala:585)
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:60)
	... 20 more
> spark function from_json should output field name, field type and field value when FAILFAST mode throw exception
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-36069
>                 URL: https://issues.apache.org/jira/browse/SPARK-36069
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: geekyouth
>            Priority: Major
>
> The spark function from_json outputs an error message when FAILFAST mode throws an exception.
>  
> But the message does not contain important information, for example: field name, field value, field type...
>  
> This information is very important for developers to find where the erroneous input data is located.
>  

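To illustrate the improvement being requested (this is not Spark's actual implementation, just a stdlib sketch): a FAILFAST-style parser can attach the field name, the expected type, and the offending value to the exception it raises, so the developer can locate the bad record without guessing. The function and schema names below are hypothetical.

```python
import json

def parse_failfast(record: str, schema: dict) -> dict:
    """Hypothetical FAILFAST-style parser: validates each field of a JSON
    record against a schema of {field_name: expected_type} and fails with
    a message that names the field, its type, and its value -- the detail
    SPARK-36069 asks from_json to include."""
    row = json.loads(record)
    out = {}
    for field, expected_type in schema.items():
        value = row.get(field)
        try:
            out[field] = expected_type(value)
        except (TypeError, ValueError):
            # Report field name, expected type, and offending value.
            raise RuntimeError(
                f"Cannot parse field '{field}' "
                f"(expected type: {expected_type.__name__}, value: {value!r})"
            )
    return out
```

With a schema like `{"price": float}`, a record `'{"price": "abc"}'` fails with `Cannot parse field 'price' (expected type: float, value: 'abc')` rather than an anonymous "Cannot parse ... as double".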


--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org