Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/10/14 10:40:24 UTC

[GitHub] [spark] anandchangediya commented on issue #21038: [SPARK-22968][DStream] Throw an exception on partition revoking issue

URL: https://github.com/apache/spark/pull/21038#issuecomment-541604823
 
 
   @koeninger According to Kafka documentation
   
   `If all the consumer instances have the same consumer group, then the records will effectively be load-balanced over the consumer instances`
   This means I can run multiple consumers with the same groupId, which lets me load-balance my application and scale it accordingly.
   I don't understand why it is considered "fundamentally wrong" to have multiple consumers with the same groupId in Spark.
   So how can I achieve scalability, i.e. listen to a single partition and increase the consumption rate with multiple Spark consumers?
   Is this a limitation of Spark's design, or is there another way to achieve this that I am unaware of?
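   As context for the quoted sentence: Kafka balances load at partition granularity, not record granularity — within one consumer group, each partition is assigned to exactly one consumer. A minimal sketch of a range-style assignor (a hypothetical simplification for illustration, not Kafka's actual implementation) shows the consequence: consumers beyond the partition count sit idle.
   
   ```python
   # Simplified sketch of range-style partition assignment within a single
   # consumer group. Illustrative only -- Kafka's real assignors live in
   # org.apache.kafka.clients.consumer; this is not their actual code.
   
   def range_assign(num_partitions, consumers):
       """Assign partition ids 0..num_partitions-1 to consumers,
       giving each consumer a contiguous range of partitions."""
       ordered = sorted(consumers)
       per_consumer, extra = divmod(num_partitions, len(ordered))
       assignment, start = {}, 0
       for i, consumer in enumerate(ordered):
           # the first `extra` consumers take one additional partition
           count = per_consumer + (1 if i < extra else 0)
           assignment[consumer] = list(range(start, start + count))
           start += count
       return assignment
   
   # One partition, three consumers in the same group: only one consumer
   # is assigned the partition; the other two receive nothing.
   print(range_assign(1, ["c1", "c2", "c3"]))
   
   # Four partitions, two consumers: the load is balanced two and two.
   print(range_assign(4, ["c1", "c2"]))
   ```
   
   So "load-balanced over the consumer instances" holds only up to the number of partitions of the topic; scaling consumption of a single partition would need more partitions, not more consumers in the group.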
   
   @SehanRathnayake Any thoughts?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org