Posted to common-dev@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/30 22:16:40 UTC

[jira] [Resolved] (HADOOP-6820) RunJar fails executing thousands of JARs within a single JVM with error "Too many open files"

     [ https://issues.apache.org/jira/browse/HADOOP-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-6820.
--------------------------------------

    Resolution: Fixed

This is a system tuning issue and/or a JVM bug. Closing as won't fix.
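
For anyone hitting this, the "system tuning" side usually means raising the per-user open-file limit. A minimal sketch of what that tuning might look like on a Linux host that uses PAM limits (the account name and values below are illustrative, not taken from this issue):

    # /etc/security/limits.conf -- raise the open-file (nofile) limit
    # for the account that runs the jobs; 65536 is just an example
    hadoop  soft  nofile  65536
    hadoop  hard  nofile  65536

Log in again (or restart the service) and check with ulimit -n that the new limit took effect.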

> RunJar fails executing thousands of JARs within a single JVM with error "Too many open files"
> ---------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6820
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6820
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 0.20.2
>         Environment: OS: Linux; user limited by a maximum number of open file descriptors (for example, ulimit -n shows 1024)
>            Reporter: Alexander Bondar
>            Priority: Minor
>         Attachments: HADOOP-6820.patch
>
>
> According to Sun JVM bug http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4167874 (affecting JVMs up to Java 7), the JarFile objects created by sun.net.www.protocol.jar.JarFileFactory never get garbage collected, even if the classloader that loaded them goes away.
> So, if a Linux user is limited in the maximum number of open file descriptors (for example, ulimit -n shows 1024) and runs RunJar.main(...) over thousands of JARs that include other nested JARs (also loaded by the ClassLoader) within a single JVM, RunJar.main(...) throws the following exception: java.lang.RuntimeException: java.io.FileNotFoundException: /some-file.txt (Too many open files)
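
As a sketch of the JVM-side workaround for the leak described above: on Java 7 and later, URLClassLoader implements Closeable, so a harness that loads and runs many JARs inside one JVM can release each loader's open jar handles explicitly rather than waiting for JarFile objects that may never be garbage collected. The class and method names here are illustrative, not Hadoop's actual RunJar or the attached patch:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class SingleJvmJarRunner {
        // Illustrative stand-in for a RunJar-style loop, not the real thing.
        public static void runMain(File jar, String mainClass, String[] args)
                throws Exception {
            URL[] urls = { jar.toURI().toURL() };
            // try-with-resources closes the loader, which in turn closes the
            // JarFile handles it opened (Java 7+), so the file descriptors are
            // returned even though the cached JarFile objects are never GC'd.
            try (URLClassLoader loader = new URLClassLoader(
                    urls, SingleJvmJarRunner.class.getClassLoader())) {
                Class<?> cls = Class.forName(mainClass, true, loader);
                cls.getMethod("main", String[].class)
                   .invoke(null, (Object) args);
            }
        }
    }

Where nested jar: URLs are opened directly, disabling URLConnection caching for those connections is another commonly suggested mitigation, at the cost of re-reading jar entries each time.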



--
This message was sent by Atlassian JIRA
(v6.2#6252)