Posted to issues@beam.apache.org by "Kenneth Knowles (Jira)" <ji...@apache.org> on 2022/01/31 18:56:00 UTC

[jira] [Updated] (BEAM-13777) confluent schema registry cache capacity

     [ https://issues.apache.org/jira/browse/BEAM-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kenneth Knowles updated BEAM-13777:
-----------------------------------
    Status: Open  (was: Triage Needed)

> confluent schema registry cache capacity
> ----------------------------------------
>
>                 Key: BEAM-13777
>                 URL: https://issues.apache.org/jira/browse/BEAM-13777
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-core
>            Reporter: Mostafa Aghajani
>            Assignee: Mostafa Aghajani
>            Priority: P2
>
> The schema registry cache capacity should be specified as an input parameter instead of defaulting to the maximum integer value. Usage patterns differ considerably from case to case, and a default of Integer.MAX_VALUE can, depending on the setup, lead to an error like this:
> {{Exception in thread "main" java.lang.OutOfMemoryError: Java heap space}}
> Documentation for the corresponding parameter in the Confluent client: [https://docs.confluent.io/5.4.2/clients/confluent-kafka-dotnet/api/Confluent.SchemaRegistry.CachedSchemaRegistryClient.html#Confluent_SchemaRegistry_CachedSchemaRegistryClient_DefaultMaxCachedSchemas]
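> For illustration, a minimal sketch of a bounded cache using the Confluent _Java_ client (the link above is to the .NET client docs); the registry URL and the capacity of 1000 are assumptions for the example, not values taken from any Beam code:
> {code:java}
> import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
> import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
>
> public class BoundedSchemaRegistryCache {
>   public static void main(String[] args) {
>     String registryUrl = "http://localhost:8081"; // assumed local registry URL
>     int cacheCapacity = 1000; // bounded capacity instead of Integer.MAX_VALUE
>     SchemaRegistryClient client =
>         new CachedSchemaRegistryClient(registryUrl, cacheCapacity);
>   }
> }
> {code}
> Exposing a parameter like {{cacheCapacity}} on the Beam side would let users pick a bound that fits their heap size instead of relying on the unbounded default.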



--
This message was sent by Atlassian Jira
(v8.20.1#820001)