Posted to common-dev@hadoop.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2007/09/27 23:57:50 UTC

[jira] Resolved: (HADOOP-518) hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum

     [ https://issues.apache.org/jira/browse/HADOOP-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley resolved HADOOP-518.
----------------------------------

    Resolution: Duplicate

This was fixed by HADOOP-1134 (block-level CRCs).
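
To illustrate the idea behind the fix: with block-level CRCs, each fixed-size chunk of data carries its own checksum, so corruption is detected at the moment the bad block is read rather than surviving a copy. The following is a minimal, self-contained sketch of that scheme (hypothetical names and block size; this is not Hadoop's actual implementation):

    import java.io.IOException;
    import java.util.zip.CRC32;

    // Minimal sketch of per-block CRC checking. A reader that verifies
    // each block against its stored CRC fails fast on corruption instead
    // of letting a copy silently re-checksum the bad bytes.
    public class BlockCrcSketch {
        static final int BLOCK_SIZE = 512;

        // Compute one CRC32 per BLOCK_SIZE chunk at write time.
        static long[] blockChecksums(byte[] data) {
            int nBlocks = (data.length + BLOCK_SIZE - 1) / BLOCK_SIZE;
            long[] crcs = new long[nBlocks];
            for (int i = 0; i < nBlocks; i++) {
                CRC32 crc = new CRC32();
                int off = i * BLOCK_SIZE;
                crc.update(data, off, Math.min(BLOCK_SIZE, data.length - off));
                crcs[i] = crc.getValue();
            }
            return crcs;
        }

        // Verify every block against its stored CRC before trusting the data.
        static void verify(byte[] data, long[] stored) throws IOException {
            long[] actual = blockChecksums(data);
            for (int i = 0; i < stored.length; i++) {
                if (actual[i] != stored[i]) {
                    throw new IOException("checksum error in block " + i);
                }
            }
        }

        public static void main(String[] args) {
            byte[] data = new byte[2048];
            long[] stored = blockChecksums(data);  // CRCs of the good data
            data[1000] ^= 1;                       // simulate on-disk corruption

            try {
                verify(data, stored);              // block 1 no longer matches
            } catch (IOException e) {
                System.out.println("copy aborted: " + e.getMessage());
            }
        }
    }

Run as written, this prints "copy aborted: checksum error in block 1": the corruption is caught at read time, which is exactly the behavior the original report asks for.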

> hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-518
>                 URL: https://issues.apache.org/jira/browse/HADOOP-518
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>         Environment: red hat
>            Reporter: Dick King
>            Assignee: Sameer Paranjpye
>
> I have a file that reliably generates a checksum error when it's read, whether by a map/reduce job as input or by a "dfs -get" command.
> However...
> if I do a "dfs -cp" from the file with the bad checksum, the copy can be read in its entirety without a checksum error.
> I would consider it reasonable either for the command to fail or for the new file to be created with a checksum error in the same place; this behavior, however, is unsettling.
> -dk
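
For what it's worth, the behavior described above is what you would expect if the copy path read the raw bytes without verifying the stored checksums and then generated fresh checksums for the destination. A minimal sketch of that failure mode (hypothetical names; nothing here is taken from Hadoop's actual code):

    import java.util.zip.CRC32;

    // Illustration of how an unverified copy "launders" corruption: fresh
    // checksums are computed over the already-corrupt bytes, so the
    // destination reads back cleanly even though its contents are wrong.
    public class UnverifiedCopySketch {
        static long crcOf(byte[] data) {
            CRC32 crc = new CRC32();
            crc.update(data, 0, data.length);
            return crc.getValue();
        }

        public static void main(String[] args) {
            byte[] original = "hello, dfs".getBytes();
            long storedCrc = crcOf(original);   // checksum written with the file

            byte[] onDisk = original.clone();
            onDisk[3] ^= 1;                     // the file rots on disk

            // A reader that checks the stored CRC catches the corruption...
            System.out.println("source verifies: " + (crcOf(onDisk) == storedCrc));   // false

            // ...but a copy that skips that check and re-checksums the bad
            // bytes yields a destination whose data and CRC agree.
            byte[] copied = onDisk.clone();
            long copiedCrc = crcOf(copied);
            System.out.println("copy verifies: " + (crcOf(copied) == copiedCrc));     // true
        }
    }

Verifying against the stored per-block CRCs before (or while) copying, as HADOOP-1134's block CRCs do, closes this hole.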

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.