Posted to common-dev@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2009/06/11 22:00:08 UTC

[jira] Commented: (HADOOP-5635) distributed cache doesn't work with other distributed file systems

    [ https://issues.apache.org/jira/browse/HADOOP-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718604#action_12718604 ] 

Hudson commented on HADOOP-5635:
--------------------------------

Integrated in Hadoop-trunk #863 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/863/])
    

> distributed cache doesn't work with other distributed file systems
> ------------------------------------------------------------------
>
>                 Key: HADOOP-5635
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5635
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: filecache
>    Affects Versions: 0.20.0
>            Reporter: Andrew Hitchcock
>            Assignee: Andrew Hitchcock
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: fix-distributed-cache.patch, HADOOP-5635.patch
>
>
> Currently the DistributedCache checks whether the file to be included has an HDFS URI; if the URI isn't in HDFS, it falls back to the default filesystem. This prevents using other distributed file systems -- such as s3, s3n, or kfs -- with the distributed cache. When a user tries to use one of those filesystems, the job fails with an error that the path can't be found in HDFS.
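
Below is a minimal sketch of the scheme-agnostic lookup the issue calls for; it is not the attached patch, and the class name, helper methods, and the s3n path in the example are hypothetical. The point is that FileSystem.get(uri, conf) lets the URI's scheme pick the FileSystem implementation, instead of special-casing hdfs and silently falling back to the default filesystem.

    // Sketch only (not the HADOOP-5635 patch); class and method names are illustrative.
    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CacheFileResolver {

      // The problematic pattern described in the issue: only hdfs:// URIs get
      // their own FileSystem; everything else falls back to the default FS,
      // so s3://, s3n://, and kfs:// cache files are looked up in the wrong place.
      static FileSystem resolveOldBehavior(URI cacheUri, Configuration conf)
          throws IOException {
        if ("hdfs".equals(cacheUri.getScheme())) {
          return FileSystem.get(cacheUri, conf);
        }
        return FileSystem.get(conf); // default FS, regardless of the URI's scheme
      }

      // Scheme-agnostic lookup: the URI decides which FileSystem implementation
      // to load, so any configured distributed filesystem resolves correctly.
      static FileSystem resolveSchemeAgnostic(URI cacheUri, Configuration conf)
          throws IOException {
        return FileSystem.get(cacheUri, conf);
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical cache file; assumes s3n credentials are configured.
        URI cacheFile = URI.create("s3n://my-bucket/libs/lookup-table.dat");
        FileSystem fs = resolveSchemeAgnostic(cacheFile, conf);
        System.out.println("Resolved filesystem: " + fs.getUri());
        System.out.println("Exists: " + fs.exists(new Path(cacheFile)));
      }
    }
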

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.