Posted to issues@beam.apache.org by "Mostafa Aghajani (Jira)" <ji...@apache.org> on 2022/01/31 19:17:00 UTC

[jira] [Work started] (BEAM-13777) confluent schema registry cache capacity

     [ https://issues.apache.org/jira/browse/BEAM-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on BEAM-13777 started by Mostafa Aghajani.
-----------------------------------------------
> confluent schema registry cache capacity
> ----------------------------------------
>
>                 Key: BEAM-13777
>                 URL: https://issues.apache.org/jira/browse/BEAM-13777
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-core
>            Reporter: Mostafa Aghajani
>            Assignee: Mostafa Aghajani
>            Priority: P2
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The cache capacity should be specified as an input parameter instead of defaulting to the maximum integer value. Usage patterns can differ considerably from case to case, and a default of Integer.MAX_VALUE can, depending on the setup, lead to errors such as:
> {{Exception in thread "main" java.lang.OutOfMemoryError: Java heap space}}
> Documentation on the corresponding parameter: [https://docs.confluent.io/5.4.2/clients/confluent-kafka-dotnet/api/Confluent.SchemaRegistry.CachedSchemaRegistryClient.html#Confluent_SchemaRegistry_CachedSchemaRegistryClient_DefaultMaxCachedSchemas]
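
For illustration only (this is not Beam's actual implementation): the idea behind the ticket is to bound the number of cached schemas rather than letting the cache grow toward Integer.MAX_VALUE entries. A minimal Java sketch of a bounded, LRU-evicting cache, with the capacity taken as a constructor parameter as the ticket proposes, could look like this; the class name and generics here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of a bounded schema cache. When the configured capacity
 * is exceeded, the least-recently-accessed entry is evicted, so memory
 * stays bounded regardless of how many distinct schemas are seen.
 */
class BoundedSchemaCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxCapacity;

    BoundedSchemaCache(int maxCapacity) {
        // accessOrder = true makes iteration order LRU, enabling LRU eviction.
        super(16, 0.75f, true);
        this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded.
        return size() > maxCapacity;
    }
}
```

Exposing `maxCapacity` as a user-facing parameter, instead of hard-coding Integer.MAX_VALUE, lets each pipeline size the cache to its own workload. Note that Confluent's Java client appears to offer a constructor of the form `new CachedSchemaRegistryClient(baseUrl, identityMapCapacity)` through which such a capacity could be passed, though that exact signature should be verified against the client version in use.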



--
This message was sent by Atlassian Jira
(v8.20.1#820001)