Posted to issues@flink.apache.org by "Subramanyam Ramanathan (Jira)" <ji...@apache.org> on 2019/12/18 11:34:00 UTC

[jira] [Comment Edited] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

    [ https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999056#comment-16999056 ] 

Subramanyam Ramanathan edited comment on FLINK-9009 at 12/18/19 11:33 AM:
--------------------------------------------------------------------------

Hi,

I'm seeing a similar issue when using Flink with a Pulsar source + sink.

I am using Flink 1.8.2 and Pulsar v2.4.2 on an 8-CPU, 16 GB RAM VM running CentOS 7.

I have 20 map transformations, each with its own source and sink, and parallelism set to 8.

If the source and sink are Kafka, I don't see any error, and the top command shows 4% memory usage.

When I use the Pulsar source + sink, the Java process consumes *40%* of memory. This happens even if I have not streamed any data.

My heap size was set to 1024 MB and I don't see any OutOfMemoryError. I think the increase in memory usage is because Flink uses off-heap memory, which Flink configures with -XX:MaxDirectMemorySize=8388607T, and something in the Pulsar source/sink is causing it to consume a lot of it.
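For what it's worth, one way to make this kind of direct-memory growth fail fast instead of silently inflating the process RSS is to cap direct memory explicitly. A minimal sketch, assuming Flink 1.8's `env.java.opts` pass-through in flink-conf.yaml (the 2g value is illustrative only, and setting it too low can starve Flink's own network buffers):

```yaml
# flink-conf.yaml -- illustrative values, not a recommendation.
# Replace Flink's effectively-unbounded -XX:MaxDirectMemorySize=8388607T
# with an explicit cap, so a leaking connector triggers an
# "OutOfMemoryError: Direct buffer memory" instead of growing unbounded.
env.java.opts: "-XX:MaxDirectMemorySize=2g"
```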

I also see the message from the issue title in the logs: *"Error: You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource."*
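As I understand it, that Netty warning is asking for the pattern below: one process-wide timer reused by every connector instance, rather than a new HashedWheelTimer per source/sink. This is only a hypothetical sketch of the pattern; it uses a standard-library ScheduledExecutorService as a stand-in for Netty's HashedWheelTimer so it is self-contained.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Sketch of the "shared resource" pattern the Netty warning asks for:
// a single process-wide timer that every connector instance reuses,
// so only one set of timer threads (and, in the Netty case, one timing
// wheel) ever exists, no matter how many sources/sinks are created.
public final class SharedTimer {
    private static final ScheduledExecutorService INSTANCE =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "shared-timer");
                t.setDaemon(true); // don't block JVM shutdown
                return t;
            });

    private SharedTimer() {}

    // Every caller gets the same timer instance.
    public static ScheduledExecutorService instance() {
        return INSTANCE;
    }
}
```

The analogous fix on the connector side would be for the client library to accept (or internally share) one timer instead of constructing its own per instance.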

 

Can you please help me understand the behaviour of the off-heap memory in this case, and why it grows so much?

Is there any fix planned for this, or any way I can work around it?

 



> Error| You are creating too many HashedWheelTimer instances.  HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-9009
>                 URL: https://issues.apache.org/jira/browse/FLINK-9009
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>         Environment: PaaS platform: OpenShift
>            Reporter: Pankaj
>            Priority: Major
>
> Steps to reproduce:
> 1- Flink with Kafka as a consumer -> writing the stream to Cassandra using the Flink Cassandra sink.
> 2- In-memory job manager and task manager with checkpointing every 5000 ms.
> 3- env.setParallelism(10) -> as the Kafka topic has 10 partitions.
> 4- There are around 13 unique streams in a single Flink runtime environment, each reading from Kafka -> processing and writing to Cassandra.
> Hardware: CPU 200 millicores. It is deployed on a PaaS platform on one node.
> Memory: 526 MB.
>  
> When I start the server, it starts Flink and then suddenly stops with the above error. It also shows an out-of-memory error.
>  
> It would be nice if anybody could suggest what is wrong.
>  
> Maven:
> flink-connector-cassandra_2.11: 1.3.2
> flink-streaming-java_2.11: 1.4.0
> flink-connector-kafka-0.11_2.11:1.4.0
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)