Posted to user@flink.apache.org by Marke Builder <ma...@gmail.com> on 2018/11/21 09:10:43 UTC
Flink stream with RabbitMQ source: Set max "request" message amount
Hi,
we are using a RabbitMQ queue as a streaming source.
Sometimes (when the queue contains a lot of messages) we get the following ERROR:
ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl -
Failed to stop Container container_1541828054499_0284_01_000004 when
stopping NMClientImpl
and sometimes:
Uncaught error from thread [flink-scheduler-1]: GC overhead limit exceeded,
shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for
ActorSystem[flink]
java.lang.OutOfMemoryError: GC overhead limit exceeded
We think the problem is that too many messages are consumed by Flink at once.
Hence the question: is there a way to limit this?
Thanks!
Marke
Re: Flink stream with RabbitMQ source: Set max "request" message amount
Posted by vino yang <ya...@gmail.com>.
Hi Marke,
AFAIK, you can set *basic.qos* (the consumer prefetch count) to limit the
consumption rate; please read this answer. [1]
I am not sure whether the Flink RabbitMQ connector lets you set this
property; you may want to check.
Thanks, vino.
[1]:
https://stackoverflow.com/questions/19163021/rabbitmq-how-to-throttle-the-consumer/19163868#19163868
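For what it's worth, recent releases of the Flink RabbitMQ connector do expose this setting through RMQConnectionConfig.Builder#setPrefetchCount, which maps to basic.qos on the consumer channel (the setter may not exist in older connector releases, so check your version). A minimal sketch; host, credentials, and queue name are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.connectors.rabbitmq.common.RMQConnectionConfig;

public class ThrottledRabbitSource {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
                .setHost("localhost")      // placeholder broker host
                .setPort(5672)
                .setUserName("guest")      // placeholder credentials
                .setPassword("guest")
                .setVirtualHost("/")
                // basic.qos: cap the number of unacknowledged messages
                // the broker will deliver to this consumer at a time
                .setPrefetchCount(500)
                .build();

        env.addSource(new RMQSource<>(
                connectionConfig,
                "myQueue",                 // placeholder queue name
                true,                      // use correlation ids (exactly-once)
                new SimpleStringSchema()))
           .print();

        env.execute("RabbitMQ source with bounded prefetch");
    }
}
```

With a bounded prefetch the broker delivers at most that many unacknowledged messages to the source at a time, so the consumer cannot buffer the whole backlog in heap, which is the likely cause of the GC overhead error above.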