Posted to issues@storm.apache.org by "P. Taylor Goetz (JIRA)" <ji...@apache.org> on 2018/09/21 17:30:00 UTC

[jira] [Updated] (STORM-2745) Hdfs Open Files problem

     [ https://issues.apache.org/jira/browse/STORM-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

P. Taylor Goetz updated STORM-2745:
-----------------------------------
    Fix Version/s:     (was: 1.x)
                       (was: 2.0.0)

> Hdfs Open Files problem
> -----------------------
>
>                 Key: STORM-2745
>                 URL: https://issues.apache.org/jira/browse/STORM-2745
>             Project: Apache Storm
>          Issue Type: New Feature
>          Components: storm-hdfs
>    Affects Versions: 2.0.0, 1.x
>            Reporter: Shoeb
>            Priority: Major
>              Labels: features, pull-request-available, starter
>   Original Estimate: 48h
>          Time Spent: 50m
>  Remaining Estimate: 47h 10m
>
> Issue:
> The problem exists when there are multiple HDFS writers in the writersMap, since each writer keeps an open HDFS handle to its file. In the case of an inactive writer (i.e., one that has not consumed any data for a long period), the file is never closed and remains open indefinitely.
> Ideally, such files should be closed and their HDFS writers removed from the writersMap.
> Solution:
> Implement a ClosingFilesPolicy based on tick tuple intervals: at each tick tuple, every writer is checked and closed if it has been idle for too long, as sketched below.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)