Posted to user@flink.apache.org by Arvid Heise <ar...@ververica.com> on 2020/10/02 09:45:11 UTC
Re: SocketException: Too many open files
Hi Sateesh,
my suspicion would be that your custom sink function is leaking connections
(which also count toward the open-file limit). Is there a reason you cannot
use Flink's Elasticsearch connector?
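For reference, here is a minimal sketch of what that could look like with the
bundled elasticsearch7 connector in 1.10 (host name, index name, and field name
are placeholders; the bulk flush interval is set to mimic a once-a-minute
scheduler):

```java
import java.util.Collections;
import java.util.List;

import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class EsSinkExample {

    public static ElasticsearchSink<String> buildSink() {
        List<HttpHost> hosts =
                Collections.singletonList(new HttpHost("es-host", 9200, "http"));

        // The connector keeps one REST client per subtask and closes it in
        // close(), so no connections are leaked across invocations.
        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                hosts,
                (ElasticsearchSinkFunction<String>) (element, ctx, indexer) -> {
                    IndexRequest request = Requests.indexRequest()
                            .index("my-index")
                            .source(Collections.singletonMap("payload", element));
                    indexer.add(request);
                });

        builder.setBulkFlushMaxActions(1000);   // batch writes
        builder.setBulkFlushInterval(60_000L);  // flush roughly once a minute
        return builder.build();
    }
}
```

With that, `stream.addSink(EsSinkExample.buildSink())` would replace both the
custom sink and the external scheduler, and batching/retries are handled for
you.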
I might have more ideas when you share your sink function.
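In the meantime, one way to confirm a descriptor leak is to watch how many
file descriptors the process actually holds while the job runs. A small
Linux-only helper (hypothetical, not part of Flink; it reads /proc/self/fd)
that you could log periodically from inside the job:

```java
import java.io.File;

public class OpenFdProbe {

    /**
     * Returns the number of file descriptors the current process holds,
     * or -1 if /proc is unavailable (non-Linux systems).
     */
    public static int openFdCount() {
        File[] fds = new File("/proc/self/fd").listFiles();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        // If this number grows steadily while the job runs, something
        // (e.g. a client created per invoke() call) is not being closed.
        System.out.println("open fds: " + openFdCount());
    }
}
```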
Best,
Arvid
On Sun, Sep 27, 2020 at 7:16 PM mars <sk...@yahoo.com> wrote:
> Hi,
>
> I am using 1.10.0 version of Flink on EMR.
>
> I am not using the default Flink sink. I have a sink function on the
> stream, and within its invoke() method I create a data structure (VO) and
> put it in a Map.
>
> The EMR step I am running is a Spring-based Flink job, and I have a
> scheduler that runs every minute, looks for items in the Map, generates
> JSON based on the VO from the Map, sends it to Elasticsearch, and removes
> the entry from the HashMap once it has been sent to ES successfully.
>
> I am using m5.2xlarge instances for the worker nodes and m5.4xlarge for
> the master node.
>
> I have set the ulimit to 500K for all users (*), both soft and hard
> limits, on the master and worker nodes.
>
> Thanks again for your response.
>
> Sateesh
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
--
Arvid Heise | Senior Java Developer
<https://www.ververica.com/>
Follow us @VervericaData
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream Processing | Event Driven | Real Time
--
Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Toni) Cheng