Posted to hdfs-dev@hadoop.apache.org by "Colin Patrick McCabe (Created) (JIRA)" <ji...@apache.org> on 2012/03/23 21:33:28 UTC

[jira] [Created] (HDFS-3134) harden edit log loader against malformed or malicious input

harden edit log loader against malformed or malicious input
-----------------------------------------------------------

                 Key: HDFS-3134
                 URL: https://issues.apache.org/jira/browse/HDFS-3134
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Colin Patrick McCabe
            Assignee: Colin Patrick McCabe


Currently, the edit log loader does not handle bad or malicious input sensibly.

Feeding the edit log loader bad input can often trigger OutOfMemoryErrors, NullPointerExceptions, or other unchecked exceptions.  In some environments, an out-of-memory error can cause the whole JVM process to be terminated.
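
The kind of unguarded pattern that makes this possible looks roughly like the following sketch (illustrative only, not the actual HDFS code): a length field is read from the stream and allocated with no sanity check.

import java.io.DataInputStream;
import java.io.IOException;

public class UnguardedReadSketch {
  // Illustrative only: an attacker-controlled length field goes straight
  // into an array allocation.  A value near Integer.MAX_VALUE can trigger
  // an OutOfMemoryError; a negative value throws NegativeArraySizeException.
  public static byte[] readBytes(DataInputStream in) throws IOException {
    int length = in.readInt();       // no upper bound, no sign check
    byte[] buf = new byte[length];   // may allocate gigabytes, or throw
    in.readFully(buf);
    return buf;
  }
}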

It's clear that we want these failures to be reported as IOExceptions rather than as unchecked exceptions.  We also want to avoid out-of-memory situations in the first place.
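
For illustration, a minimal sketch of that behavior, using a hypothetical OpDecoder interface rather than the real FSEditLogLoader/FSEditLogOp classes: any unchecked exception thrown while decoding an op is rethrown as an IOException that says where the input went bad.

import java.io.IOException;

// Hypothetical decoder standing in for the real edit log op reader;
// the actual HDFS classes differ in detail.
interface OpDecoder {
  Object decodeNextOp() throws IOException;  // returns null at end of stream
}

public class SafeLoaderSketch {
  /** Load ops, translating any unchecked exception into a checked IOException. */
  public static void loadAll(OpDecoder decoder) throws IOException {
    long opsLoaded = 0;
    while (true) {
      Object op;
      try {
        op = decoder.decodeNextOp();
      } catch (RuntimeException e) {
        // A NullPointerException here means the input was bad, not that we
        // want the whole load (or the JVM) to die; report it as corrupt
        // input with enough context to locate the problem.
        throw new IOException("malformed edit log entry after "
            + opsLoaded + " well-formed ops", e);
      }
      if (op == null) {
        break;  // end of the edit log segment
      }
      opsLoaded++;
      // ... apply the op to the namespace here ...
    }
  }
}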

The main task here is to put a sensible upper limit on the lengths of the arrays and strings we allocate based on length fields read from the input.  The other task is to avoid triggering unchecked exceptions (by dereferencing potentially-null references, for example).  Instead, we should validate the input ahead of time and report an error message that reflects the actual problem with it.
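
As a sketch of the bounded-allocation idea (the helper name readBytesWithLimit and the 1 MB cap below are made up for illustration, not taken from the HDFS code): the length field is validated before the array is allocated, and a bad value becomes a descriptive IOException.

import java.io.DataInputStream;
import java.io.IOException;

public class BoundedReadSketch {
  // Hypothetical cap; the real limit would be tuned to what a single path,
  // block list, or permission string can legitimately require.
  private static final int MAX_FIELD_LENGTH = 1 << 20;  // 1 MB

  /**
   * Read a length-prefixed byte array, rejecting negative or oversized
   * lengths before any allocation happens.
   */
  public static byte[] readBytesWithLimit(DataInputStream in, int maxLength)
      throws IOException {
    int length = in.readInt();
    if (length < 0 || length > maxLength) {
      // Bad length fields become a descriptive IOException instead of a
      // NegativeArraySizeException or a huge allocation that can take down
      // the JVM with an OutOfMemoryError.
      throw new IOException("edit log field length " + length
          + " is outside the allowed range [0, " + maxLength + "]");
    }
    byte[] buf = new byte[length];
    in.readFully(buf);
    return buf;
  }

  public static String readStringWithLimit(DataInputStream in)
      throws IOException {
    return new String(readBytesWithLimit(in, MAX_FIELD_LENGTH), "UTF-8");
  }
}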

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira