Posted to issues@spark.apache.org by "ALIM TOPRAK FIRAT (Jira)" <ji...@apache.org> on 2021/02/02 09:56:00 UTC
[jira] [Created] (SPARK-34328) Cannot work with Spark
ALIM TOPRAK FIRAT created SPARK-34328:
-----------------------------------------
Summary: Cannot work with Spark
Key: SPARK-34328
URL: https://issues.apache.org/jira/browse/SPARK-34328
Project: Spark
Issue Type: Bug
Components: PySpark
Affects Versions: 3.0.1
Reporter: ALIM TOPRAK FIRAT
I have already googled this problem and tried the suggestions I found. I don't understand why Spark is this hard to set up and get working without problems.
I set up Spark on my Ubuntu server. I am using Python 3.6 with Spark 3.0.1 / Hadoop 2.7.
The code that I'm trying to run is below; it's from a Udemy lecture.
```
from pyspark import SparkConf, SparkContext
import collections
import py4j

conf = SparkConf().setMaster('local').setAppName('RatingsHist')
sc = SparkContext(conf = conf)
lines = sc.textFile("/home/toprak/Codes/spark/uDemyData/Friends/fakefriends.csv")

def parseLine(line):
    fields = line.split(',')
    age = int(fields[21])
    numFriends = int(fields[3])
    return (age, numFriends)

rdd = lines.map(parseLine)
totalsByAge = rdd.mapValues(lambda x: (x, 1)).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))
averagesByAge = totalsByAge.mapValues(lambda x: x[0] / x[1])
results = averagesByAge.collect()
for result in results:
    print(result)
```
When I try to run it, I get:
```
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-8-47d1d28735c7> in <module>
      1 totalsByAge = rdd.mapValues(lambda x: (x, 1)).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))
      2 averagesByAge = totalsByAge.mapValues(lambda x: x[0] / x[1])
----> 3 results = averagesByAge.collect()
      4 for result in results:
      5     print(result)

~/.local/lib/python3.6/site-packages/pyspark/rdd.py in collect(self)
    887         """
    888         with SCCallSiteSync(self.context) as css:
--> 889             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    890         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    891

~/.local/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1303         answer = self.gateway_client.send_command(command)
   1304         return_value = get_return_value(
-> 1305             answer, self.gateway_client, self.target_id, self.name)
   1306
   1307         for temp_arg in temp_args:

~/.local/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 2, 192.168.1.27, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 605, in main
    process()
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 595, in process
    out_iter = func(split_index, iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 2596, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 2596, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 425, in func
    return f(iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 1946, in combineLocally
    merger.mergeValues(iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/shuffle.py", line 238, in mergeValues
    for k, v in iterator:
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/util.py", line 107, in wrapper
    return f(*args, **kwargs)
  File "<ipython-input-5-ae8b1b1c8ea5>", line 3, in parseLine
IndexError: list index out of range

	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1209)
	at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1215)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2139)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2164)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1004)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1003)
	at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:168)
	at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 605, in main
    process()
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/worker.py", line 595, in process
    out_iter = func(split_index, iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 2596, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 2596, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 425, in func
    return f(iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/rdd.py", line 1946, in combineLocally
    merger.mergeValues(iterator)
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/shuffle.py", line 238, in mergeValues
    for k, v in iterator:
  File "/home/toprak/.local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/util.py", line 107, in wrapper
    return f(*args, **kwargs)
  File "<ipython-input-5-ae8b1b1c8ea5>", line 3, in parseLine
IndexError: list index out of range

	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:503)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:638)
	at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:621)
	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:456)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1209)
	at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1215)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
```
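For what it's worth, the traceback points at a worker-side IndexError raised on line 3 of parseLine, i.e. `age = int(fields[21])`, rather than at Spark itself: each row of fakefriends.csv splits into only a few fields, so index 21 is out of range. Assuming the file is the four-column dataset used in that course (userID, name, age, numFriends; this column layout is an assumption, not something confirmed in the report), a minimal corrected parser would look like this:
```
def parseLine(line):
    # Assumed fakefriends.csv columns: 0 = userID, 1 = name, 2 = age, 3 = numFriends
    fields = line.split(',')
    age = int(fields[2])         # was fields[21], which indexes past the last column
    numFriends = int(fields[3])
    return (age, numFriends)
```
With that one change (21 to 2), the mapValues/reduceByKey pipeline should collect (age, average numFriends) pairs instead of aborting the stage.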