Posted to issues@spark.apache.org by "AnywalkerGISer (Jira)" <ji...@apache.org> on 2022/05/13 08:18:00 UTC

[jira] [Created] (SPARK-39176) Fixed a problem with pyspark serializing pre-1970 datetime in windows

AnywalkerGISer created SPARK-39176:
--------------------------------------

             Summary: Fixed a problem with pyspark serializing pre-1970 datetime in windows
                 Key: SPARK-39176
                 URL: https://issues.apache.org/jira/browse/SPARK-39176
             Project: Spark
          Issue Type: Improvement
          Components: PySpark, Tests, Windows
    Affects Versions: 3.0.1
            Reporter: AnywalkerGISer
             Fix For: 3.0.1


h3. Fix problems with PySpark on Windows
 # Fixed datetime-to-timestamp conversion for dates before 1970 (one possible approach is sketched after this list);
 # Fixed datetime conversion when the timestamp is negative;
 # Added a test script.
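
A minimal sketch of how the naive-datetime path could avoid the C mktime/localtime calls that fail on Windows for pre-1970 dates. The helper name to_internal_microseconds and the use of the current local UTC offset are illustrative assumptions of this sketch, not the actual patch in pyspark/sql/types.py:
{code:python}
from datetime import datetime, timezone

EPOCH_UTC = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_internal_microseconds(dt):
    """Convert a datetime to microseconds since the Unix epoch without
    time.mktime, which raises OverflowError on Windows for pre-1970 dates.
    Hypothetical sketch, not the actual Spark patch."""
    if dt.tzinfo is None:
        # Assumption: apply the *current* local offset to naive values.
        # This avoids the failing OS call but ignores historical DST changes.
        local_tz = datetime.now().astimezone().tzinfo
        dt = dt.replace(tzinfo=local_tz)
    delta = dt - EPOCH_UTC  # pure timedelta arithmetic, valid before 1970
    return (delta.days * 86400 + delta.seconds) * 1000000 + delta.microseconds{code}
The key point is that subtracting a fixed epoch with timedelta arithmetic never touches the platform's C library, so pre-1970 dates behave the same on Windows as on Linux.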

h3. PySpark has problems serializing pre-1970 times on Windows

An exception occurs when executing the following code under Windows:
{code:python}
from datetime import datetime

# `sc` and `spark` are the SparkContext and SparkSession provided by the PySpark shell
rdd = sc.parallelize([('a', datetime(1957, 1, 9, 0, 0)),
                      ('b', datetime(2014, 1, 27, 0, 0))])
df = spark.createDataFrame(rdd, ["id", "date"])

df.show()
df.printSchema()

print(df.collect()){code}
{code:java}
  File "...\spark\python\lib\pyspark.zip\pyspark\sql\types.py", line 195, in toInternal
    else time.mktime(dt.timetuple()))
OverflowError: mktime argument out of range

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
   at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
   at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
   at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
   at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
   at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
   at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
   at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
   at org.apache.spark.scheduler.Task.run(Task.scala:127)
   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   ... 1 more {code}
_*and*_
{code:java}
File ...\spark\python\lib\pyspark.zip\pyspark\sql\types.py, in fromInternal:
Line 207:   return datetime.datetime.fromtimestamp(ts // 1000000).replace(microsecond=ts % 1000000)

OSError: [Errno 22] Invalid argument {code}
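The reverse path fails for the same reason: datetime.fromtimestamp rejects negative (pre-1970) timestamps on Windows. Below is a minimal sketch of a deserialization that avoids that OS call; the helper name from_internal_microseconds and the reuse of the current local offset are assumptions of the sketch, not the actual fromInternal implementation:
{code:python}
from datetime import datetime, timedelta, timezone

EPOCH_UTC = datetime(1970, 1, 1, tzinfo=timezone.utc)

def from_internal_microseconds(ts):
    """Convert microseconds since the Unix epoch back to a naive local
    datetime without datetime.fromtimestamp, which raises OSError 22 on
    Windows for negative values. Hypothetical sketch only."""
    # Assumption: the current local offset stands in for the historical one.
    local_tz = datetime.now().astimezone().tzinfo
    utc_value = EPOCH_UTC + timedelta(microseconds=ts)
    return utc_value.astimezone(local_tz).replace(tzinfo=None){code}
With helpers like these, the 1957 value from the repro above round-trips through serialization and back without calling mktime or fromtimestamp on a pre-1970 instant.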
 

*After the fix, the above code runs successfully:*

!image-2022-05-13-16-15-13-862.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
