Posted to issues@spark.apache.org by "AnywalkerGISer (Jira)" <ji...@apache.org> on 2022/05/13 09:10:00 UTC

[jira] (SPARK-39176) Pyspark failed to serialize dates before 1970 in Windows

    [ https://issues.apache.org/jira/browse/SPARK-39176 ]


    AnywalkerGISer deleted comment on SPARK-39176:
    ----------------------------------------

was (Author: JIRAUSER289430):
https://github.com/apache/spark/pull/36537

> Pyspark failed to serialize dates before 1970 in Windows
> --------------------------------------------------------
>
>                 Key: SPARK-39176
>                 URL: https://issues.apache.org/jira/browse/SPARK-39176
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, Tests, Windows
>    Affects Versions: 3.0.1
>            Reporter: AnywalkerGISer
>            Priority: Major
>             Fix For: 3.0.1
>
>
> h3. Fix problems with PySpark on Windows
>  # Fixed conversion of datetimes before 1970 to timestamps (a sketch of such a conversion follows this list);
>  # Fixed datetime conversion when the timestamp is negative;
>  # Added a test script.
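> One possible shape for such a conversion is plain datetime arithmetic against the epoch instead of time.mktime / datetime.fromtimestamp, both of which reject pre-1970 values on Windows. This is only an illustrative sketch; the helper names (to_internal, from_internal, EPOCH) are made up here and are not PySpark's API, and the actual patch may do something different (e.g. handle the local timezone the way Spark does):
> {code:python}
> from datetime import datetime, timedelta
>
> EPOCH = datetime(1970, 1, 1)
>
> def to_internal(dt):
>     # Microseconds since the epoch via timedelta arithmetic; works for
>     # pre-1970 (negative) values because no OS-level mktime call is made.
>     # NOTE: treats dt as UTC; matching Spark's local-timezone handling
>     # would need an explicit offset.
>     return (dt - EPOCH) // timedelta(microseconds=1)
>
> def from_internal(ts):
>     # Inverse conversion; avoids datetime.fromtimestamp, which rejects
>     # negative timestamps on Windows.
>     return EPOCH + timedelta(microseconds=ts)
>
> print(to_internal(datetime(1957, 1, 9)))                 # negative microseconds
> print(from_internal(to_internal(datetime(1957, 1, 9))))  # 1957-01-09 00:00:00{code}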
> h3. PySpark has problems serializing pre-1970 times on Windows
> An exception occurs when executing the following code on Windows:
> {code:java}
> from datetime import datetime
>
> # sc and spark come from a PySpark shell (SparkContext / SparkSession)
> rdd = sc.parallelize([('a', datetime(1957, 1, 9, 0, 0)),
>                       ('b', datetime(2014, 1, 27, 0, 0))])
> df = spark.createDataFrame(rdd, ["id", "date"])
> df.show()
> df.printSchema()
> print(df.collect()){code}
> {code:java}
>   File "...\spark\python\lib\pyspark.zip\pyspark\sql\types.py", line 195, in toInternal
>     else time.mktime(dt.timetuple()))
> OverflowError: mktime argument out of range
> at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
>    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
>    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
>    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
>    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
>    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
>    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
>    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
>    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
>    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
>    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
>    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
>    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>    at org.apache.spark.scheduler.Task.run(Task.scala:127)
>    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
>    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
>    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
>    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>    ... 1 more {code}
> _*and*_
> {code:java}
> File ...\spark\python\lib\pyspark.zip\pyspark\sql\types.py, in fromInternal:
> Line 207:   return datetime.datetime.fromtimestamp(ts // 1000000).replace(microsecond=ts % 1000000)
> OSError: [Errno 22] Invalid argument {code}
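> Both failures can be reproduced on Windows without Spark at all; they come from the underlying CPython calls that toInternal / fromInternal make, as the tracebacks above show. A minimal standalone sketch (assuming stock CPython on Windows; on other platforms these calls succeed and nothing is printed):
> {code:python}
> import time
> from datetime import datetime
>
> dt = datetime(1957, 1, 9)
>
> # toInternal path: on Windows, time.mktime rejects times before the epoch
> try:
>     time.mktime(dt.timetuple())
> except OverflowError as e:
>     print(e)   # mktime argument out of range
>
> # fromInternal path: on Windows, fromtimestamp rejects negative timestamps
> ts = (dt - datetime(1970, 1, 1)).total_seconds()   # negative for pre-1970 dates
> try:
>     datetime.fromtimestamp(ts)
> except OSError as e:
>     print(e)   # [Errno 22] Invalid argument{code}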



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org