Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2017/08/21 09:31:00 UTC
[jira] [Commented] (SPARK-21796) pyspark count failed in python3.5.2
[ https://issues.apache.org/jira/browse/SPARK-21796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134949#comment-16134949 ]
Hyukjin Kwon commented on SPARK-21796:
--------------------------------------
Could you share your input file? I can't reproduce this on the current master branch:
{code}
Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/08/21 18:28:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
      /_/
Using Python version 3.5.2 (default, Jul 2 2016 17:53:06)
SparkSession available as 'spark'.
>>> user_data = sc.textFile("*.md")
>>> user_data.count()
119
{code}
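For what it's worth, the {{EOFError: Ran out of input}} in the reported traceback is what {{pickle.loads}} raises when it receives empty or truncated bytes, so the worker appears to be getting a truncated serialized command rather than choking on the input file itself (one common cause, though not confirmed here, is a Python version mismatch between the driver and the executors). A minimal illustration in plain CPython, with no Spark involved:

{code}
import pickle

# Unpickling empty bytes reproduces the exact error message
# seen in the worker traceback: "EOFError: Ran out of input"
try:
    pickle.loads(b"")
except EOFError as e:
    print(e)  # Ran out of input
{code}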
> pyspark count failed in python3.5.2
> -----------------------------------
>
> Key: SPARK-21796
> URL: https://issues.apache.org/jira/browse/SPARK-21796
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.1.1
> Environment: Python 3.5.2 Anaconda3 4.2.0
> Reporter: cen yuhai
>
> steps:
> {code}
> pyspark
> user_data = sc.textFile("/data/external_table/ods/table/dt=2017-08-17/hour=01/*.txt")
> user_data.count()
> {code}
> Exceptions:
> {code}
> Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
> File "/home/master/platform/spark/python/pyspark/worker.py", line 98, in main
> command = pickleSer._read_with_length(infile)
> File "/home/master/platform/spark/python/pyspark/serializers.py", line 164, in _read_with_length
> return self.loads(obj)
> File "/home/master/platform/spark/python/pyspark/serializers.py", line 419, in loads
> return pickle.loads(obj, encoding=encoding)
> EOFError: Ran out of input
> at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org