Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2021/12/13 00:08:00 UTC

[jira] [Resolved] (SPARK-37598) Pyspark's newAPIHadoopRDD() method fails with ShortWritables

     [ https://issues.apache.org/jira/browse/SPARK-37598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-37598.
----------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

Issue resolved by pull request 34838
[https://github.com/apache/spark/pull/34838]

> Pyspark's newAPIHadoopRDD() method fails with ShortWritables
> ------------------------------------------------------------
>
>                 Key: SPARK-37598
>                 URL: https://issues.apache.org/jira/browse/SPARK-37598
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.4.8, 3.0.3, 3.1.2, 3.2.0
>            Reporter: Keith Massey
>            Assignee: Keith Massey
>            Priority: Minor
>             Fix For: 3.3.0
>
>
> If sc.newAPIHadoopRDD() is called from PySpark with an InputFormat whose records contain a ShortWritable field, the call fails. The reason is that ShortWritable is not explicitly handled by PythonHadoopUtil the way other numeric writables (such as LongWritable) are, so the ShortWritable is never converted into an object that Spark can serialize and a serialization error occurs. Below is an example stack trace from the PySpark shell; a sketch of the missing conversion follows the stack trace.
> {code:java}
> >>> rdd = sc.newAPIHadoopRDD(inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
> ...             keyClass="org.apache.hadoop.io.NullWritable",
> ...             valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
> ...             conf=conf)
> 2021-12-08 14:38:40,439 ERROR scheduler.TaskSetManager: task 0.0 in stage 15.0 (TID 31) had a not serializable result: org.apache.hadoop.io.ShortWritable
> Serialization stack:
> - object not serializable (class: org.apache.hadoop.io.ShortWritable, value: 1)
> - writeObject data (class: java.util.HashMap)
> - object (class java.util.HashMap, {price=1})
> - field (class: scala.Tuple2, name: _2, type: class java.lang.Object)
> - object (class scala.Tuple2, (1,{price=1}))
> - element of array (index: 0)
> - array (class [Lscala.Tuple2;, size 1); not retrying
> Traceback (most recent call last):
>  File "<stdin>", line 4, in <module>
>  File "/home/hduser/spark-3.1.2-bin-hadoop3.2/python/pyspark/context.py", line 853, in newAPIHadoopRDD
>   jconf, batchSize)
>  File "/home/hduser/spark-3.1.2-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
>  File "/home/hduser/spark-3.1.2-bin-hadoop3.2/python/pyspark/sql/utils.py", line 111, in deco
>   return f(*a, **kw)
>  File "/home/hduser/spark-3.1.2-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
> : org.apache.spark.SparkException: Job aborted due to stage failure: task 0.0 in stage 15.0 (TID 31) had a not serializable result: org.apache.hadoop.io.ShortWritable
> Serialization stack:
> - object not serializable (class: org.apache.hadoop.io.ShortWritable, value: 1)
> - writeObject data (class: java.util.HashMap)
> - object (class java.util.HashMap, {price=1})
> - field (class: scala.Tuple2, name: _2, type: class java.lang.Object)
> - object (class scala.Tuple2, (1,{price=1}))
> - element of array (index: 0)
> - array (class [Lscala.Tuple2;, size 1)
> at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
> at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
> at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
> at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
> at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
> at scala.Option.foreach(Option.scala:407)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
> at org.apache.spark.rdd.RDD.$anonfun$take$1(RDD.scala:1449)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
> at org.apache.spark.rdd.RDD.take(RDD.scala:1422)
> at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:173)
> at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:385)
> at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
> at py4j.Gateway.invoke(Gateway.java:282)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:238)
> at java.lang.Thread.run(Thread.java:748)
> {code}
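>
> To make the root cause concrete, the sketch below is a minimal, stand-alone converter that pattern-matches Writable subclasses the way the description above says PythonHadoopUtil handles other numeric writables. The object name, method name, and exact case list are illustrative assumptions rather than Spark's actual source; the point is only that a ShortWritable case analogous to the existing LongWritable handling is what is missing.
> {code:scala}
> // Illustrative sketch only: a Writable-to-JVM-object converter in the style the
> // issue describes, not the actual PythonHadoopUtil code.
> import org.apache.hadoop.io.{BooleanWritable, DoubleWritable, FloatWritable,
>   IntWritable, LongWritable, NullWritable, ShortWritable, Text, Writable}
>
> object WritableConversionSketch {
>   def convert(writable: Writable): Any = writable match {
>     case iw: IntWritable     => iw.get()
>     case lw: LongWritable    => lw.get()
>     case dw: DoubleWritable  => dw.get()
>     case fw: FloatWritable   => fw.get()
>     case bw: BooleanWritable => bw.get()
>     case t: Text             => t.toString
>     case _: NullWritable     => null
>     // The case this issue is about: without it, a ShortWritable falls through
>     // to the catch-all below and later reaches the serializer unconverted.
>     case sw: ShortWritable   => sw.get()
>     case other: Writable     => other // unconverted Writable -> NotSerializableException downstream
>   }
>
>   def main(args: Array[String]): Unit = {
>     // With the ShortWritable case present, this prints the plain value 1.
>     println(convert(new ShortWritable(1.toShort)))
>   }
> }
> {code}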



