Posted to hdfs-dev@hadoop.apache.org by "Daniel Templeton (JIRA)" <ji...@apache.org> on 2019/03/19 15:38:00 UTC

[jira] [Created] (HDFS-14381) Add option to hdfs dfs -cat to ignore corrupt blocks

Daniel Templeton created HDFS-14381:
---------------------------------------

             Summary: Add option to hdfs dfs -cat to ignore corrupt blocks
                 Key: HDFS-14381
                 URL: https://issues.apache.org/jira/browse/HDFS-14381
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: tools
    Affects Versions: 3.2.0
            Reporter: Daniel Templeton


If I have a file in HDFS that contains 100 blocks, and I happen to lose the first block (for whatever obscure/unlikely/dumb reason), I can no longer access the 99% of the file that is still intact.  In the case of some data formats (e.g. text), the remaining data may still be useful.  It would be nice to have a way to extract the remaining data without having to manually reassemble the file contents from the block files.  Something like {{hdfs dfs -cat -ignoreCorrupt <file>}}.  The command could insert a marker to show where the missing blocks are.
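The proposed behavior could be sketched as a local simulation (this is not HDFS client code; the {{-ignoreCorrupt}} flag, the block-reading interface, and the marker format are all assumptions from this proposal):

```python
# Local simulation of the proposed "-cat -ignoreCorrupt" behavior:
# concatenate the readable blocks of a file, substituting a visible
# marker wherever a block is corrupt or missing.
# The marker format is hypothetical; a real implementation would read
# blocks via the DFSInputStream rather than from an in-memory list.
MISSING_BLOCK_MARKER = b"<<<MISSING BLOCK %d>>>"

def cat_ignore_corrupt(blocks):
    """blocks: per-block contents as bytes, or None for a lost block."""
    out = bytearray()
    for i, block in enumerate(blocks):
        if block is None:
            # Block is unreadable: emit a marker instead of failing.
            out += MISSING_BLOCK_MARKER % i
        else:
            out += block
    return bytes(out)

# With block 0 lost, the remaining 99% of the data is still extracted:
print(cat_ignore_corrupt([None, b"hello ", b"world"]))
```

For text-like formats the marker keeps the output parseable by downstream tools, whereas today the read fails outright at the first missing block.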



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org