Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2022/10/03 05:53:00 UTC

[jira] [Commented] (SPARK-40614) Job aborted due to stage failure: Task 165 in stage 292.0 failed 4 times, most recent failure: Lost task 165.3 in stage 292.0 (TID 122333) (x.x.x.x executor 0): java.lang.NullPointerException

    [ https://issues.apache.org/jira/browse/SPARK-40614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17612179#comment-17612179 ] 

Hyukjin Kwon commented on SPARK-40614:
--------------------------------------

[~Mar_zieh] Mind posting the reproducer please?

> Job aborted due to stage failure: Task 165 in stage 292.0 failed 4 times, most recent failure: Lost task 165.3 in stage 292.0 (TID 122333) (x.x.x.x executor 0): java.lang.NullPointerException
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-40614
>                 URL: https://issues.apache.org/jira/browse/SPARK-40614
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.1.2
>            Reporter: Marzieh
>            Priority: Major
>
> I have to group a PySpark DataFrame and then create one ID for each group. I could not use a custom function in a Window, so I had to use applyInPandas, which not only does the job but also improves its speed. But when I run the code, I sometimes get this error. Since applyInPandas is still marked experimental, I am sure this is a bug.
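> The pattern is roughly the following. This is only a minimal sketch: the column names, sample rows, and the assign_group_id function are illustrative placeholders, not the actual job.
>
>     import pandas as pd
>     from pyspark.sql import SparkSession
>     from pyspark.sql.types import StructType, StructField, StringType, LongType
>
>     spark = SparkSession.builder.getOrCreate()
>
>     # Toy input: several rows per group key.
>     df = spark.createDataFrame(
>         [("a", 1), ("a", 2), ("b", 3)], ["group_col", "value"])
>
>     # applyInPandas requires the output schema to be declared up front.
>     schema = StructType([
>         StructField("group_col", StringType()),
>         StructField("value", LongType()),
>         StructField("group_id", LongType()),
>     ])
>
>     def assign_group_id(pdf: pd.DataFrame) -> pd.DataFrame:
>         # Every row in the group gets the same ID (the choice of ID here is illustrative).
>         pdf["group_id"] = abs(hash(pdf["group_col"].iloc[0]))
>         return pdf
>
>     result = df.groupBy("group_col").applyInPandas(assign_group_id, schema=schema)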
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
>     at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
>     at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
>     at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>     at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
>     at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
>     at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
>     at scala.Option.foreach(Option.scala:407)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
>     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
>     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
>     at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$1(RDD.scala:1020)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
>     at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:1018)
>     at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:854)
>     at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:68)
>     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>     at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
>     at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
>     at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>     at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
>     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
>     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
>     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
>     at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
>     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
>     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
>     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
>     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
>     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
>     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
>     at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
>     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
>     at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>     at py4j.Gateway.invoke(Gateway.java:282)
>     at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>     at py4j.commands.CallCommand.execute(CallCommand.java:79)
>     at py4j.GatewayConnection.run(GatewayConnection.java:238)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> Moreover, when I call count() on the output of applyInPandas instead of writing it, there is no error at all.
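> For reference, this is how I trigger the two code paths (names as in the sketch above; the JDBC URL and connection properties are placeholders):
>
>     result.count()  # always completes without error
>
>     # Intermittently fails with the NullPointerException above
>     # (jdbc_url and jdbc_props are placeholders defined elsewhere):
>     result.write.jdbc(url=jdbc_url, table="target_table",
>                       mode="append", properties=jdbc_props)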
> Would you please fix this bug?
>  
> Many Thanks.


