Posted to user@flink.apache.org by John Smith <ja...@gmail.com> on 2020/02/24 21:47:15 UTC

Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Hi. Just upgraded to 1.10.0 and getting the below error when I deploy my
tasks.

The first one seems to deploy OK, but subsequent ones seem to throw this
error. They still seem to work, though.

javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=consumer-2
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)

Re: Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Posted by Khachatryan Roman <kh...@gmail.com>.
Hi John,

Sorry for the late reply.

I'd assume that this is a separate issue.
Regarding the original one, I'm pretty sure it's
https://issues.apache.org/jira/browse/FLINK-8093
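FLINK-8093 tracks exactly this: several Kafka consumers in the same JVM try to
register an app-info mbean under the same id. A commonly suggested workaround
(a sketch, not something confirmed in this thread) is to give each consumer a
unique `client.id` in the properties passed to the Kafka source, since the
mbean name kafka.consumer:type=app-info,id=<client.id> is derived from it:

```java
import java.util.Properties;
import java.util.UUID;

public class UniqueClientId {
    public static void main(String[] args) {
        // Properties that would be handed to the Flink Kafka source.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");
        // A unique client.id per consumer means distinct mbean names,
        // so registrations from different jobs cannot collide.
        props.setProperty("client.id", "my-job-" + UUID.randomUUID());
        System.out.println(props.getProperty("client.id"));
    }
}
```

The job name prefix ("my-job-") here is hypothetical; any scheme that keeps
ids distinct across jobs on the same task manager would do.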

Regards,
Roman


On Wed, Feb 26, 2020 at 5:50 PM John Smith <ja...@gmail.com> wrote:

> Just curious: is this also the reason why some jobs in the UI show their
> metrics and others do not?
>
> Looking at 2 jobs, one displays how many bytes in and out it has received,
> while the other one shows all zeros. But I know it's working, though.
>
> On Wed, 26 Feb 2020 at 11:19, John Smith <ja...@gmail.com> wrote:
>
>> This is what I got from the logs.
>>
>> 2020-02-25 00:13:38,124 WARN  org.apache.kafka.common.utils.AppInfoParser
>>                   - Error registering AppInfo mbean
>> javax.management.InstanceAlreadyExistsException:
>> kafka.consumer:type=app-info,id=consumer-1
>> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>> at
>> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>> at
>> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
>> at
>> org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.initializeConnections(KafkaPartitionDiscoverer.java:58)
>> at
>> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.open(AbstractPartitionDiscoverer.java:94)
>> at
>> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:505)
>> at
>> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>> at
>> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>> at
>> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
>> at
>> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>> at
>> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
>> at
>> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>> at
>> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>> at java.lang.Thread.run(Thread.java:748)
>>
>>
>>
>>
>> On Tue, 25 Feb 2020 at 15:50, John Smith <ja...@gmail.com> wrote:
>>
>>> Ok as soon as I can tomorrow.
>>>
>>> Thanks
>>>
>>> On Tue, 25 Feb 2020 at 11:51, Khachatryan Roman <
>>> khachatryan.roman@gmail.com> wrote:
>>>
>>>> Hi John,
>>>>
>>>> Seems like this is another instance of
>>>> https://issues.apache.org/jira/browse/FLINK-8093
>>>> Could you please provide the full stacktrace?
>>>>
>>>> Regards,
>>>> Roman
>>>>
>>>>
>>>> On Mon, Feb 24, 2020 at 10:48 PM John Smith <ja...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi. Just upgraded to 1.10.0 and getting the below error when I deploy
>>>>> my tasks.
>>>>>
>>>>> The first one seems to deploy OK, but subsequent ones seem to throw
>>>>> this error. They still seem to work, though.
>>>>>
>>>>> javax.management.InstanceAlreadyExistsException:
>>>>> kafka.consumer:type=app-info,id=consumer-2
>>>>> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>>>>> at
>>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>>>>> at
>>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>>>>> at
>>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>>>>> at
>>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>>>>> at
>>>>> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>>>>> at
>>>>> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
>>>>> at
>>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
>>>>> at
>>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
>>>>> at
>>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
>>>>> at
>>>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
>>>>> at
>>>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)
>>>>>
>>>>

Re: Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Posted by John Smith <ja...@gmail.com>.
Just curious: is this also the reason why some jobs in the UI show their
metrics and others do not?

Looking at 2 jobs, one displays how many bytes in and out it has received,
while the other one shows all zeros. But I know it's working, though.

On Wed, 26 Feb 2020 at 11:19, John Smith <ja...@gmail.com> wrote:

> This is what I got from the logs.
>
> 2020-02-25 00:13:38,124 WARN  org.apache.kafka.common.utils.AppInfoParser
>                   - Error registering AppInfo mbean
> javax.management.InstanceAlreadyExistsException:
> kafka.consumer:type=app-info,id=consumer-1
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
> at
> org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.initializeConnections(KafkaPartitionDiscoverer.java:58)
> at
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.open(AbstractPartitionDiscoverer.java:94)
> at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:505)
> at
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
>
> On Tue, 25 Feb 2020 at 15:50, John Smith <ja...@gmail.com> wrote:
>
>> Ok as soon as I can tomorrow.
>>
>> Thanks
>>
>> On Tue, 25 Feb 2020 at 11:51, Khachatryan Roman <
>> khachatryan.roman@gmail.com> wrote:
>>
>>> Hi John,
>>>
>>> Seems like this is another instance of
>>> https://issues.apache.org/jira/browse/FLINK-8093
>>> Could you please provide the full stacktrace?
>>>
>>> Regards,
>>> Roman
>>>
>>>
>>> On Mon, Feb 24, 2020 at 10:48 PM John Smith <ja...@gmail.com>
>>> wrote:
>>>
>>>> Hi. Just upgraded to 1.10.0 and getting the below error when I deploy
>>>> my tasks.
>>>>
>>>> The first one seems to deploy OK, but subsequent ones seem to throw
>>>> this error. They still seem to work, though.
>>>>
>>>> javax.management.InstanceAlreadyExistsException:
>>>> kafka.consumer:type=app-info,id=consumer-2
>>>> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>>>> at
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>>>> at
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>>>> at
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>>>> at
>>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>>>> at
>>>> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>>>> at
>>>> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
>>>> at
>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
>>>> at
>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
>>>> at
>>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
>>>> at
>>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
>>>> at
>>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)
>>>>
>>>

Re: Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Posted by John Smith <ja...@gmail.com>.
This is what I got from the logs.

2020-02-25 00:13:38,124 WARN  org.apache.kafka.common.utils.AppInfoParser - Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=consumer-1
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.initializeConnections(KafkaPartitionDiscoverer.java:58)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.open(AbstractPartitionDiscoverer.java:94)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:505)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
at java.lang.Thread.run(Thread.java:748)
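Note the severity: AppInfoParser catches the exception and only logs this WARN,
which is why the jobs keep working; only the duplicate app-info mbean
registration is skipped. The collision itself is plain JMX behavior and can be
reproduced without Kafka (a hypothetical minimal example, not from this
thread):

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanCollision {
    // Standard MBean convention: interface name = class name + "MBean".
    public interface DummyMBean { }
    public static class Dummy implements DummyMBean { }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Same ObjectName the Kafka consumer uses for its app-info bean.
        ObjectName name = new ObjectName("kafka.consumer:type=app-info,id=consumer-1");
        server.registerMBean(new Dummy(), name);     // first registration succeeds
        try {
            server.registerMBean(new Dummy(), name); // second one collides
        } catch (InstanceAlreadyExistsException e) {
            // Kafka's AppInfoParser catches this and logs a WARN, then continues.
            System.out.println("InstanceAlreadyExistsException: " + e.getMessage());
        }
    }
}
```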




On Tue, 25 Feb 2020 at 15:50, John Smith <ja...@gmail.com> wrote:

> Ok as soon as I can tomorrow.
>
> Thanks
>
> On Tue, 25 Feb 2020 at 11:51, Khachatryan Roman <
> khachatryan.roman@gmail.com> wrote:
>
>> Hi John,
>>
>> Seems like this is another instance of
>> https://issues.apache.org/jira/browse/FLINK-8093
>> Could you please provide the full stacktrace?
>>
>> Regards,
>> Roman
>>
>>
>> On Mon, Feb 24, 2020 at 10:48 PM John Smith <ja...@gmail.com>
>> wrote:
>>
>>> Hi. Just upgraded to 1.10.0 and getting the below error when I deploy
>>> my tasks.
>>>
>>> The first one seems to deploy OK, but subsequent ones seem to throw
>>> this error. They still seem to work, though.
>>>
>>> javax.management.InstanceAlreadyExistsException:
>>> kafka.consumer:type=app-info,id=consumer-2
>>> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>>> at
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>>> at
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>>> at
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>>> at
>>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>>> at
>>> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>>> at
>>> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
>>> at
>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
>>> at
>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
>>> at
>>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
>>> at
>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
>>> at
>>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)
>>>
>>

Re: Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Posted by John Smith <ja...@gmail.com>.
Ok as soon as I can tomorrow.

Thanks

On Tue, 25 Feb 2020 at 11:51, Khachatryan Roman <kh...@gmail.com>
wrote:

> Hi John,
>
> Seems like this is another instance of
> https://issues.apache.org/jira/browse/FLINK-8093
> Could you please provide the full stacktrace?
>
> Regards,
> Roman
>
>
> On Mon, Feb 24, 2020 at 10:48 PM John Smith <ja...@gmail.com>
> wrote:
>
>> Hi. Just upgraded to 1.10.0 and getting the below error when I deploy my
>> tasks.
>>
>> The first one seems to deploy OK, but subsequent ones seem to throw
>> this error. They still seem to work, though.
>>
>> javax.management.InstanceAlreadyExistsException:
>> kafka.consumer:type=app-info,id=consumer-2
>> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>> at
>> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>> at
>> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>> at
>> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
>> at
>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
>> at
>> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)
>>
>

Re: Getting javax.management.InstanceAlreadyExistsException when upgraded to 1.10

Posted by Khachatryan Roman <kh...@gmail.com>.
Hi John,

Seems like this is another instance of
https://issues.apache.org/jira/browse/FLINK-8093
Could you please provide the full stacktrace?

Regards,
Roman


On Mon, Feb 24, 2020 at 10:48 PM John Smith <ja...@gmail.com> wrote:

> Hi. Just upgraded to 1.10.0 and getting the below error when I deploy my
> tasks.
>
> The first one seems to deploy OK, but subsequent ones seem to throw
> this error. They still seem to work, though.
>
> javax.management.InstanceAlreadyExistsException:
> kafka.consumer:type=app-info,id=consumer-2
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:805)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:659)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:639)
> at
> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.getConsumer(KafkaConsumerThread.java:477)
> at
> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:167)
>