Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/05/29 08:51:04 UTC

[jira] [Resolved] (SPARK-20911) Unable to do windowing operation on Spark 2.1.1 and kafka 0.10.2.0

     [ https://issues.apache.org/jira/browse/SPARK-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-20911.
-------------------------------
    Resolution: Duplicate

> Unable to do windowing operation on Spark 2.1.1 and kafka 0.10.2.0
> ------------------------------------------------------------------
>
>                 Key: SPARK-20911
>                 URL: https://issues.apache.org/jira/browse/SPARK-20911
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.1.1
>            Reporter: Mustak
>
> Hi, I'm unable to do a windowing operation using Spark 2.1.1 and Kafka 0.10.2.0. I'm getting the following error message in Spark:
> 17/05/29 12:03:33 ERROR JobScheduler: Error running job streaming job 1496039609000 ms.0
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 1 times, most recent failure: Lost task 0.0 in stage 16.0 (TID 9, localhost, executor driver): java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
> 	at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1431)
> 	at org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1132)
> 	at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.seek(CachedKafkaConsumer.scala:95)
> 	at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:69)
> 	at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:228)
> 	at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:194)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
> 	at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:364)
> 	at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1021)
> 	at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:996)
> 	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:936)
> 	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:996)
> 	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:700)
> 	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:99)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:748)
> I'm able to run the same program with Spark 1.6.2 and Kafka 0.8.
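
For reference, a minimal sketch of the kind of DStream windowing job that can hit this, assuming the spark-streaming-kafka-0-10 direct stream API; the broker address, group id, and topic name below are hypothetical placeholders:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object WindowedKafkaSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowedKafkaSketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Hypothetical broker/group/topic values, for illustration only.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "windowed-sketch",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Array("test-topic"), kafkaParams)
    )

    // Each window recomputes the KafkaRDDs of several past batches, so tasks
    // for overlapping windows can end up touching the same cached KafkaConsumer,
    // which is what the ConcurrentModificationException above complains about.
    val windowedCounts = stream
      .map(_.value)
      .window(Seconds(30), Seconds(10))
      .count()

    windowedCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Since the stack trace above goes through CachedKafkaConsumer, a commonly suggested mitigation is to persist the stream before applying the window, or to disable the consumer cache by setting spark.streaming.kafka.consumer.cache.enabled to false.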



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org