Posted to common-issues@hadoop.apache.org by "Mahadev konar (JIRA)" <ji...@apache.org> on 2009/10/21 00:11:59 UTC

[jira] Updated: (HADOOP-6097) Multiple bugs w/ Hadoop archives

     [ https://issues.apache.org/jira/browse/HADOOP-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahadev konar updated HADOOP-6097:
----------------------------------

    Attachment: HADOOP-6097-0.20.patch

Patch with changes corresponding to MAPREDUCE-1010, for the hadoop-0.20 branch.

> Multiple bugs w/ Hadoop archives
> --------------------------------
>
>                 Key: HADOOP-6097
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6097
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1, 0.19.2, 0.20.0, 0.20.1
>            Reporter: Ben Slusky
>            Assignee: Ben Slusky
>             Fix For: 0.20.2
>
>         Attachments: HADOOP-6097-0.20.patch, HADOOP-6097-0.20.patch, HADOOP-6097-0.20.patch, HADOOP-6097-v2.patch, HADOOP-6097.patch
>
>
> Found and fixed several bugs involving Hadoop archives:
> - In makeQualified(), the sloppy conversion from Path to URI and back mangles the path if it contains an escape-worthy character (see the first sketch after this list).
> - fileStatusInIndex() may have to read more than one segment of the index; the LineReader and the count of bytes read must be reset for each block (second sketch below).
> - har:// connections cannot be indexed by (scheme, authority, username) -- the path is significant as well (third sketch below). Caching them that way limits a Hadoop client to opening one archive per filesystem. It seems safe not to cache them, since they wrap another connection that does the actual networking.
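>
> A minimal sketch (not from the patch) of the round-trip problem, assuming
> stock org.apache.hadoop.fs.Path semantics; the example path is made up:
>
>   import org.apache.hadoop.fs.Path;
>   import java.net.URI;
>
>   public class PathRoundTrip {
>     public static void main(String[] args) {
>       Path p = new Path("/user/data/file with space");
>       URI u = p.toUri();  // the space is percent-encoded: .../file%20with%20space
>       // Rebuilding a Path from the encoded *string* form of the URI keeps
>       // the escapes as literal characters instead of decoding them.
>       Path mangled = new Path(u.toString());
>       System.out.println(p);        // /user/data/file with space
>       System.out.println(mangled);  // /user/data/file%20with%20space
>     }
>   }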
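>
> The shape of the index-scanning fix, as a hedged sketch using
> org.apache.hadoop.util.LineReader and org.apache.hadoop.io.Text; Store,
> begin, end, and the input stream are illustrative stand-ins for the
> corresponding pieces of fileStatusInIndex():
>
>   // One pass per index segment: the LineReader and the byte count are
>   // created fresh inside the loop, not hoisted above it.
>   for (Store s : candidateStores) {
>     in.seek(s.begin);
>     LineReader reader = new LineReader(in, getConf());  // fresh reader per block
>     long read = 0;                                      // reset count per block
>     Text line = new Text();
>     while (read < s.end - s.begin) {
>       int n = reader.readLine(line);
>       if (n <= 0) break;
>       read += n;
>       // ... parse the line and compare it to the path being looked up ...
>     }
>   }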
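>
> Why keying the cache on (scheme, authority, username) breaks har:// -- an
> illustration with made-up archive names:
>
>   Path a = new Path("har://hdfs-namenode:8020/user/x/one.har/part-0");
>   Path b = new Path("har://hdfs-namenode:8020/user/x/two.har/part-0");
>   // Both URIs have scheme "har" and authority "hdfs-namenode:8020", so a
>   // cache that ignores the path returns the same filesystem instance for
>   // both, pointed at whichever archive was opened first.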

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.