Posted to hdfs-dev@hadoop.apache.org by "James Clampffer (JIRA)" <ji...@apache.org> on 2016/03/21 16:18:25 UTC

[jira] [Created] (HDFS-10188) libhdfs++: Implement debug allocators

James Clampffer created HDFS-10188:
--------------------------------------

             Summary: libhdfs++: Implement debug allocators
                 Key: HDFS-10188
                 URL: https://issues.apache.org/jira/browse/HDFS-10188
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: James Clampffer
            Assignee: James Clampffer


I propose implementing a set of operator new/operator delete pairs with additional checking to detect double deletes, reads after delete, and writes after delete, to help debug resource ownership issues and prevent new ones from entering the library.

One of the most common classes of bugs we hit is use-after-free.  The continuation pattern makes these really tricky to debug because by the time a SIGSEGV is raised the context that caused the error is long gone.

The plan is to add allocators that can be switched on to do the following, listed in order of increasing runtime cost:
1: no-op, forward through to the default new/delete
2: memset freed memory to 0 so stale reads fail fast
3: implement operator new with mmap, and lock (mprotect) that region of memory once it's been deleted so any later access faults immediately; obviously this can't be left on indefinitely because the memory is never unmapped
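A minimal sketch of what levels 2 and 3 could look like on POSIX systems (the function names and the size-header layout here are illustrative, not part of the actual patch):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <new>
#include <sys/mman.h>
#include <unistd.h>

// Level 2 sketch: remember the allocation size in a small header so the
// user bytes can be scrubbed on delete, making stale reads fail fast.
static void* debug_alloc(std::size_t n) {
  auto* p = static_cast<std::size_t*>(std::malloc(n + sizeof(std::size_t)));
  if (!p) throw std::bad_alloc();
  *p = n;            // stash the size just before the user region
  return p + 1;
}

static void debug_free(void* ptr) {
  if (!ptr) return;
  auto* p = static_cast<std::size_t*>(ptr) - 1;
  std::memset(ptr, 0, *p);   // scrub user bytes before releasing
  std::free(p);
}

// Level 3 sketch: back each allocation with its own mmap'd pages and
// mprotect(PROT_NONE) them on delete instead of unmapping, so any later
// access faults immediately at the point of misuse.  Addresses are never
// reused, which is why this mode can't run indefinitely.
static void* mmap_alloc(std::size_t n) {
  const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
  const std::size_t len =
      ((n + sizeof(std::size_t) + page - 1) / page) * page;  // round up
  void* mem = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED) throw std::bad_alloc();
  *static_cast<std::size_t*>(mem) = len;  // stash the mapped length
  return static_cast<std::size_t*>(mem) + 1;
}

static void mmap_free(void* ptr) {
  if (!ptr) return;
  auto* base = static_cast<std::size_t*>(ptr) - 1;  // page-aligned start
  mprotect(base, *base, PROT_NONE);  // lock the region; never unmapped
}
```

Wiring these into replacement operator new/operator delete (selected at build time) would let the whole library switch modes without source changes.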

This should also put some groundwork in place for implementing specialized allocators for tiny objects that we churn through, such as std::string.
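As a sketch of that groundwork (all names here are hypothetical, not from the patch): once small-object allocations go through a standard-conforming allocator type, strategies can be swapped per type.  This trivial version just counts heap allocations; a real small-object allocator would recycle fixed-size blocks from a pool instead.

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <string>

// Running count of heap allocations made through the allocator below.
static std::size_t& alloc_count() {
  static std::size_t count = 0;
  return count;
}

// Minimal allocator satisfying the C++11 Allocator requirements;
// allocator_traits fills in rebind, construct, destroy, etc.
template <typename T>
struct CountingAllocator {
  using value_type = T;
  CountingAllocator() = default;
  template <typename U>
  CountingAllocator(const CountingAllocator<U>&) {}
  T* allocate(std::size_t n) {
    ++alloc_count();
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }
  void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <typename T, typename U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) {
  return true;  // stateless: all instances interchangeable
}
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) {
  return false;
}

// A std::string variant whose character storage is instrumented.
using DebugString =
    std::basic_string<char, std::char_traits<char>, CountingAllocator<char>>;
```

Swapping CountingAllocator for a pool-backed one later would not change any call sites that use DebugString.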



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)