Posted to user@spark.apache.org by samsayiam <ha...@gmail.com> on 2016/05/18 21:04:23 UTC

Couldn't find leader offsets

I have seen questions posted about this on SO and on this list, but I haven't
seen a response that addresses my issue.  I am trying to create a direct
stream connection to a Kafka topic, but it fails with "Couldn't find leader
offsets for Set(...)".  If I run a Kafka console consumer I can read the topic,
but I can't do it with Spark.  Can someone tell me where I'm going wrong here?

Test topic info:
vagrant@broker1$ ./bin/kafka-topics.sh --describe --zookeeper 10.30.3.2:2181
--topic footopic
Topic:footopic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: footopic	Partition: 0	Leader: 0	Replicas: 0	  Isr: 0

Consuming from Kafka:
vagrant@broker1$ bin/kafka-console-consumer.sh --zookeeper 10.30.3.2:2181
--from-beginning --topic footopic
this is a test
and so is this
goodbye

Attempting from Spark:
spark-submit --class com.foo.Experiment --master local[*] --jars
/vagrant/spark-streaming-kafka-assembly_2.10-1.6.1.jar
/vagrant/spark-app-1.0-SNAPSHOT.jar 10.0.7.34:9092

...

Using kafkaparams: {auto.offset.reset=smallest,
metadata.broker.list=10.0.7.34:9092}
16/05/18 20:27:21 INFO utils.VerifiableProperties: Verifying properties
16/05/18 20:27:21 INFO utils.VerifiableProperties: Property
auto.offset.reset is overridden to smallest
16/05/18 20:27:21 INFO utils.VerifiableProperties: Property group.id is
overridden to 
16/05/18 20:27:21 INFO utils.VerifiableProperties: Property
zookeeper.connect is overridden to 
16/05/18 20:27:21 INFO consumer.SimpleConsumer: Reconnect due to socket
error: java.nio.channels.ClosedChannelException
Exception in thread "main" org.apache.spark.SparkException:
java.nio.channels.ClosedChannelException
org.apache.spark.SparkException: Couldn't find leader offsets for
Set([footopic,0])
...
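[Editor's note: a quick sanity check, not part of the original post. The `metadata.broker.list` value from the log above can be verified from the driver host with plain Python before involving Spark at all. The helper name `check_brokers` is hypothetical; this only tests that each broker hostname resolves, not Kafka protocol health:]

```python
import socket

def check_brokers(broker_list):
    """Given a metadata.broker.list string like 'host1:9092,host2:9092',
    return a dict mapping each entry to True if its hostname resolves."""
    results = {}
    for entry in broker_list.split(","):
        host, _, port = entry.rpartition(":")
        try:
            # getaddrinfo performs the same name lookup the consumer would
            socket.getaddrinfo(host, int(port))
            results[entry] = True
        except socket.gaierror:
            results[entry] = False
    return results

# Broker list from the spark-submit invocation above
print(check_brokers("10.0.7.34:9092"))  # → {'10.0.7.34:9092': True}
```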


Any help is appreciated.

Thanks,
ch.





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Couldn-t-find-leader-offsets-tp26978.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Couldn't find leader offsets

Posted by Colin Hall <ha...@gmail.com>.
Hey Cody, thanks for the response. I looked at connectivity as a possibility based on your advice, and after a lot of digging I found a couple of mentions on SO and the Kafka lists of name resolution causing issues. I created an entry in /etc/hosts on the Spark host to resolve the broker's hostname to its IP, and that seemed to do the trick.
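[Editor's note: the fix described above amounts to an /etc/hosts entry of this shape on the Spark driver host. The hostname below is a placeholder taken from the shell prompts in the thread; use whatever name the broker actually advertises in its metadata, paired with the broker's real IP:]

```
10.0.7.34   broker1
```

The direct stream asks the broker for metadata, and the broker replies with its advertised hostname, so that name must resolve on every host where the driver and executors run.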

Thanks much.
ch.



> On May 19, 2016, at 8:19 AM, Cody Koeninger <co...@koeninger.org> wrote:
> 
> Looks like a networking issue to me.  Make sure you can connect to the
> broker on the specified host and port from the spark driver (and the
> executors too, for that matter)
> 


Re: Couldn't find leader offsets

Posted by Cody Koeninger <co...@koeninger.org>.
Looks like a networking issue to me.  Make sure you can connect to the
broker on the specified host and port from the Spark driver (and from the
executors too, for that matter).
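[Editor's note: the connectivity check Cody suggests can be scripted. This is a generic TCP probe, a sketch only and not Spark- or Kafka-specific, to run from the driver and executor hosts:]

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Address from metadata.broker.list in the post above; the result
# depends on your network, so no expected output is shown
print(can_connect("10.0.7.34", 9092))
```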
