Posted to issues@spark.apache.org by "pavan (JIRA)" <ji...@apache.org> on 2018/12/15 20:20:00 UTC

[jira] [Updated] (SPARK-26377) java.lang.IllegalStateException: No current assignment for partition

     [ https://issues.apache.org/jira/browse/SPARK-26377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

pavan updated SPARK-26377:
--------------------------
    Description: 
Hi,

   I am using the Spark Kafka direct stream (KafkaUtils.createDirectStream) with SubscribePattern, supplying initial offsets for the topics that match the pattern. When I run the Spark job on the job server, I get the following exception and the job is terminated. Please let me know if a quick resolution is possible.

Kafka Params:

"bootstrap.servers" -> credentials.getBrokers,
 "key.deserializer" -> classOf[StringDeserializer],
 "value.deserializer" -> classOf[ByteArrayDeserializer],
 "enable.auto.commit" -> (false: java.lang.Boolean)

"group.id" -> "abc"

API:

KafkaUtils.createDirectStream(streamingContext, PreferConsistent, SubscribePattern[K, V](regexPattern, allKafkaParams, offsets), perPartitionConfig)
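
For reference, a minimal, self-contained sketch of the setup above (the broker address, topic regex, starting offset, and rate cap are hypothetical stand-ins for our real values; the classes are the standard spark-streaming-kafka-0-10 API):

import java.util.regex.Pattern
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies, PerPartitionConfig}

object SubscribePatternRepro {
  def main(args: Array[String]): Unit = {
    val streamingContext = new StreamingContext(
      new SparkConf().setAppName("subscribe-pattern-repro"), Seconds(10))

    val allKafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092", // hypothetical; we use credentials.getBrokers
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[ByteArrayDeserializer],
      "enable.auto.commit" -> (false: java.lang.Boolean),
      "group.id" -> "abc")

    // Hypothetical pattern and a starting offset for one matching partition.
    val regexPattern = Pattern.compile(".*raw_timeseries")
    val offsets = Map(new TopicPartition("com-cibigdata2.v1.iot.raw_timeseries", 0) -> 0L)

    // Per-partition rate cap; the fixed value is purely illustrative.
    val perPartitionConfig = new PerPartitionConfig {
      override def maxRatePerPartition(tp: TopicPartition): Long = 1000L
    }

    val stream = KafkaUtils.createDirectStream[String, Array[Byte]](
      streamingContext,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.SubscribePattern[String, Array[Byte]](
        regexPattern, allKafkaParams, offsets),
      perPartitionConfig)

    stream.foreachRDD(rdd => println(s"batch size: ${rdd.count()}"))
    streamingContext.start()
    streamingContext.awaitTermination()
  }
}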

 

Error Log:

{ "duration": "33.523 secs", "classPath": "com.appiot.dataingestion.DataIngestionJob", "startTime": "2018-12-15T18:28:08.207Z", "context": "c~1d750906-1fa7-44f9-a258-04963ac53150~9dc097e3-bf0f-432c-9c27-68c41a4009cd", "result": \\{ "message": "java.lang.IllegalStateException: No current assignment for partition com-cibigdata2.v1.iot.raw_timeseries-0", "errorClass": "java.lang.RuntimeException", "stack": "java.lang.RuntimeException: java.lang.IllegalStateException: No current assignment for partition com-cibigdata2.v1.iot.raw_timeseries-0\n\tat org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:269)\n\tat org.apache.kafka.clients.consumer.internals.SubscriptionState.seek(SubscriptionState.java:294)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1249)\n\tat org.apache.spark.streaming.kafka010.SubscribePattern$$anonfun$onStart$4.apply(ConsumerStrategy.scala:160)\n\tat org.apache.spark.streaming.kafka010.SubscribePattern$$anonfun$onStart$4.apply(ConsumerStrategy.scala:159)\n\tat scala.collection.Iterator$class.foreach(Iterator.scala:893)\n\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1336)\n\tat scala.collection.IterableLike$class.foreach(IterableLike.scala:72)\n\tat scala.collection.AbstractIterable.foreach(Iterable.scala:54)\n\tat org.apache.spark.streaming.kafka010.SubscribePattern.onStart(ConsumerStrategy.scala:159)\n\tat org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.consumer(DirectKafkaInputDStream.scala:72)\n\tat org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:242)\n\tat org.apache.spark.streaming.DStreamGraph$$anonfun$start$7.apply(DStreamGraph.scala:54)\n\tat org.apache.spark.streaming.DStreamGraph$$anonfun$start$7.apply(DStreamGraph.scala:54)\n\tat scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)\n\tat scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)\n\tat scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)\n\tat scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)\n\tat scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)\n\tat scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)\n\tat scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)\n\tat scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)\n\tat scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)\n\tat scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)\n\tat scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)\n\tat scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\n\tat scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\n\tat scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\n\tat scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)\n\tat ... run in separate thread using org.apache.spark.util.ThreadUtils ... 
()\n\tat org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:578)\n\tat org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:572)\n\tat com.sap.appiot.dataingestion.DataIngestionJob$.runJob(DataIngestionJob.scala:182)\n\tat com.sap.appiot.dataingestion.DataIngestionJob$.runJob(DataIngestionJob.scala:24)\n\tat spark.jobserver.SparkJobBase$class.runJob(SparkJob.scala:31)\n\tat com.appiot.dataingestion.DataIngestionJob$.runJob(DataIngestionJob.scala:24)\n\tat com.appiot.dataingestion.DataIngestionJob$.runJob(DataIngestionJob.scala:24)\n\tat spark.jobserver.JobManagerActor$$anonfun$getJobFuture$4.apply(JobManagerActor.scala:594)\n\tat scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)\n\tat scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:808)\n" }

, "status": "ERROR", "jobId": "1a7a14b1-d21e-4f64-9037-97f1ff8ffeda", "contextId": "708bba57-c828-459c-b4f2-69c03a1d67c2" }

 

Thanks,

Pavan

 

 


> java.lang.IllegalStateException: No current assignment for partition
> --------------------------------------------------------------------
>
>                 Key: SPARK-26377
>                 URL: https://issues.apache.org/jira/browse/SPARK-26377
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.2.1
>            Reporter: pavan
>            Priority: Critical
>             Fix For: 2.2.1
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org