Posted to user@spark.apache.org by SamyaMaiti <sa...@gmail.com> on 2014/12/30 23:19:39 UTC

Kafka + Spark streaming

Hi Experts,

A few general queries:

1. Can a single block/partition in an RDD contain more than one Kafka
message, or will there be exactly one Kafka message per block? More
broadly, is the message count tied to a block in any way, or does any
message received within a particular block interval simply go into the
same block?

2. If the worker running the Kafka receiver goes down, will the
receiver be restarted on some other worker?

Regards,
Sam



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Kafka-Spark-streaming-tp20914.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Kafka + Spark streaming

Posted by Samya Maiti <sa...@gmail.com>.
Thanks TD.


Re: Kafka + Spark streaming

Posted by Tathagata Das <ta...@gmail.com>.
1. Of course, a single block / partition can hold many Kafka messages,
possibly from different Kafka topics interleaved together. The message
count is not related to the block in any way; any message received
within a particular block interval will go into the same block.
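
For illustration, a minimal sketch of a receiver-based Kafka stream
(assuming the Spark 1.2-era KafkaUtils.createStream API; the ZooKeeper
address, consumer group, and topic names are placeholders, not from
this thread). Every block interval's worth of received messages,
however many arrive, becomes one block and therefore one partition of
that batch's RDD:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaBlockIntervalSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("KafkaBlockIntervalSketch")
      .setMaster("local[2]")  // for local testing: one core for the receiver, one for processing
      // Received data is coalesced into one block every 200 ms (the default),
      // no matter how many messages arrive in that window.
      .set("spark.streaming.blockInterval", "200")

    // 2-second batches: each batch RDD gets roughly
    // (batch interval / block interval) partitions per receiver.
    val ssc = new StreamingContext(conf, Seconds(2))

    // Receiver-based Kafka stream; the map values are consumer thread counts.
    // Messages from both topics end up interleaved in the same blocks.
    val messages = KafkaUtils.createStream(
      ssc,
      "zkhost:2181",       // ZooKeeper quorum (placeholder)
      "sample-group",      // consumer group id (placeholder)
      Map("topicA" -> 1, "topicB" -> 1))

    // Count messages per batch, just to register an output operation.
    messages.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}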

2. Yes, if the worker running the receiver goes down, the receiver will
be restarted on another worker.
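
For completeness, a hedged sketch of making that restart lossless as
well, assuming Spark 1.2+ and the receiver-based API: with checkpointing
and the write-ahead log enabled, received-but-unprocessed data can be
replayed after the failure. The checkpoint directory, ZooKeeper address,
group id, and topic name below are placeholders:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object ReceiverRecoverySketch {
  def main(args: Array[String]): Unit = {
    val checkpointDir = "hdfs:///tmp/streaming-checkpoint"  // placeholder path

    def createContext(): StreamingContext = {
      val conf = new SparkConf()
        .setAppName("ReceiverRecoverySketch")
        // Spark 1.2+: also log received blocks to the checkpoint directory,
        // so data buffered by the receiver can be replayed after a failure.
        .set("spark.streaming.receiver.writeAheadLog.enable", "true")
      val ssc = new StreamingContext(conf, Seconds(2))
      ssc.checkpoint(checkpointDir)

      val messages = KafkaUtils.createStream(
        ssc, "zkhost:2181", "sample-group", Map("topicA" -> 1))
      messages.map(_._2).count().print()
      ssc
    }

    // Rebuild the context from the checkpoint if one exists, otherwise create it.
    // If the executor hosting the receiver is lost, Spark schedules a new
    // receiver on another live worker.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}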

TD


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org