Posted to issues@flink.apache.org by "Pankaj (JIRA)" <ji...@apache.org> on 2018/03/16 18:35:00 UTC

[jira] [Comment Edited] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

    [ https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16402323#comment-16402323 ] 

Pankaj edited comment on FLINK-9009 at 3/16/18 6:34 PM:
--------------------------------------------------------

No, it is not related to Kafka. I have already tried that and checked: the problem only occurs when we introduce more parallelism and Flink is writing to Cassandra with two clusters. In my case I set parallelism = 10 because I have 10 partitions in the Kafka topic.

I do not face any problem in the same scenario when Flink is not writing to Cassandra.

The problem can be replicated with the steps I shared in the description.

I'm not sure whether Flink has the fixes for the two tickets below in the Cassandra connector API I shared:

https://issues.apache.org/jira/browse/CASSANDRA-11243

https://issues.apache.org/jira/browse/CASSANDRA-10837
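The warning in the issue title comes from Netty's HashedWheelTimer, which the Cassandra driver creates per Cluster instance, so high parallelism can multiply timers until memory runs out. As a minimal plain-Java sketch of the shared-resource pattern the warning asks for (all class and field names here are hypothetical stand-ins, not the actual driver API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedTimerDemo {
    // Counts how many timer instances the application has created.
    static final AtomicInteger INSTANCES = new AtomicInteger();

    // Stand-in for an expensive per-instance resource such as
    // Netty's HashedWheelTimer (hypothetical, for illustration only).
    static class WheelTimer {
        WheelTimer() { INSTANCES.incrementAndGet(); }
    }

    // One shared instance for the whole JVM, created once at class load.
    private static final WheelTimer SHARED = new WheelTimer();

    static WheelTimer sharedTimer() { return SHARED; }

    public static void main(String[] args) {
        // Ten "parallel sinks" all reuse the same timer instead of
        // each constructing its own.
        for (int i = 0; i < 10; i++) {
            WheelTimer t = sharedTimer();
        }
        System.out.println("timers created: " + INSTANCES.get());
        // prints "timers created: 1"
    }
}
```

The same idea applied to the connector would mean reusing a single Cluster (and therefore a single timer) across all parallel sink instances rather than building one per subtask.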

 



> Error| You are creating too many HashedWheelTimer instances.  HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-9009
>                 URL: https://issues.apache.org/jira/browse/FLINK-9009
>             Project: Flink
>          Issue Type: Bug
>         Environment: PaaS platform: OpenShift
>            Reporter: Pankaj
>            Priority: Blocker
>
> Steps to reproduce:
> 1- Flink with Kafka as a consumer -> writing the stream to Cassandra using the Flink Cassandra sink.
> 2- In-memory job manager and task manager with checkpointing every 5000 ms.
> 3- env.setParallelism(10) -> as the Kafka topic has 10 partitions.
> 4- There are around 13 unique streams in a single Flink runtime environment, all reading from Kafka -> processing and writing to Cassandra.
> Hardware: CPU: 200 millicores. It is deployed on a PaaS platform on one node.
> Memory: 526 MB.
>  
> When I start the server, Flink starts and then all of a sudden stops with the above error. It also shows an out-of-memory error.
>  
> It would be nice if anybody could suggest what is wrong.
>  
> Maven:
> flink-connector-cassandra_2.11: 1.3.2
> flink-streaming-java_2.11: 1.4.0
> flink-connector-kafka-0.11_2.11:1.4.0
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)