Posted to dev@kafka.apache.org by HenryCaiHaiying <gi...@git.apache.org> on 2016/06/28 05:34:37 UTC

[GitHub] kafka pull request #1563: KAFKA-3904: File descriptor leaking (Too many open...

GitHub user HenryCaiHaiying opened a pull request:

    https://github.com/apache/kafka/pull/1563

    KAFKA-3904: File descriptor leaking (Too many open files) for long running stream process
    
    I noticed that after my application had been running for more than a day, it would hit a 'Too many open files' error.
    
    I used 'lsof' to list all the file descriptors held by the process: there were over 32K of them, and most pointed at the .lock files, e.g. a single lock file showed up 2700 times.
    
    Looking at the code, I think the problem is here:
        FileChannel channel = new RandomAccessFile(lockFile, "rw").getChannel();
    Each time a new RandomAccessFile is constructed, a new file descriptor is allocated.
    
    Fix this by caching the FileChannels we have created so far.
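    
    A minimal sketch of that caching idea (not the actual diff in this PR; the class and method names below are hypothetical) keeps one FileChannel per lock-file path and reuses it, instead of opening a new RandomAccessFile on every lock attempt:
    
        import java.io.File;
        import java.io.IOException;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.nio.file.StandardOpenOption;
        import java.util.HashMap;
        import java.util.Map;
        
        // Hypothetical helper: one cached FileChannel per lock file, so repeated
        // lock attempts reuse a single file descriptor instead of leaking new ones.
        public class LockChannelCache {
        
            private final Map<String, FileChannel> channels = new HashMap<>();
        
            // Return the cached channel for this lock file, opening it only once.
            public synchronized FileChannel channelFor(File lockFile) throws IOException {
                String key = lockFile.getAbsolutePath();
                FileChannel channel = channels.get(key);
                if (channel == null || !channel.isOpen()) {
                    channel = FileChannel.open(lockFile.toPath(),
                        StandardOpenOption.CREATE,
                        StandardOpenOption.READ,
                        StandardOpenOption.WRITE);
                    channels.put(key, channel);
                }
                return channel;
            }
        
            // tryLock() returns null if another process already holds the lock.
            public FileLock tryLock(File lockFile) throws IOException {
                return channelFor(lockFile).tryLock();
            }
        
            // Closing a channel releases both the descriptor and any lock held on it.
            public synchronized void closeAll() throws IOException {
                for (FileChannel channel : channels.values())
                    channel.close();
                channels.clear();
            }
        }
    
    With something like this in place, locking the same state directory thousands of times over the lifetime of a long-running stream process costs one descriptor per lock file rather than one per attempt.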

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HenryCaiHaiying/kafka fd

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/1563.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1563
    
----
commit bb68fb8b820e4a0baef0c549ea1a4a8cfc913187
Author: Henry Cai <hc...@pinterest.com>
Date:   2016-06-28T05:24:03Z

    KAFKA-3904: File descriptor leaking (Too many open files) for long running stream process
    
    I noticed that after my application had been running for more than a day, it would hit a 'Too many open files' error.
    
    I used 'lsof' to list all the file descriptors held by the process: there were over 32K of them, and most pointed at the .lock files, e.g. a single lock file showed up 2700 times.
    
    Looking at the code, I think the problem is here:
        FileChannel channel = new RandomAccessFile(lockFile, "rw").getChannel();
    Each time a new RandomAccessFile is constructed, a new file descriptor is allocated.
    
    Fix this by caching the FileChannels we have created so far.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] kafka pull request #1563: KAFKA-3904: File descriptor leaking (Too many open...

Posted by HenryCaiHaiying <gi...@git.apache.org>.
Github user HenryCaiHaiying closed the pull request at:

    https://github.com/apache/kafka/pull/1563

