Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/02/21 23:44:29 UTC

[jira] Updated: (HADOOP-51) per-file replication counts

     [ http://issues.apache.org/jira/browse/HADOOP-51?page=all ]

Doug Cutting updated HADOOP-51:
-------------------------------

    Assign To: Sameer Paranjpye

> per-file replication counts
> ---------------------------
>
>          Key: HADOOP-51
>          URL: http://issues.apache.org/jira/browse/HADOOP-51
>      Project: Hadoop
>         Type: New Feature
>   Components: dfs
>     Versions: 0.1
>     Reporter: Doug Cutting
>     Assignee: Sameer Paranjpye
>      Fix For: 0.1

>
> It should be possible to specify different replication counts for different files.  Perhaps the desired replication count should be an option when creating a new file.  MapReduce should take advantage of this feature so that job.xml and job.jar files, which are frequently accessed by many machines, are more highly replicated than large data files.
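
The proposal above can be sketched in plain Java. This is only an illustrative model, not Hadoop code: the class and method names (FileTable, create, getReplication, DEFAULT_REPLICATION) are hypothetical, standing in for whatever namenode-side bookkeeping would actually track a per-file replication count alongside a cluster-wide default.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a namenode-side table mapping each file path to its
// own replication count, with a cluster default for files created without one.
// None of these names are real Hadoop APIs.
public class FileTable {
    static final int DEFAULT_REPLICATION = 3;
    private final Map<String, Integer> replication = new HashMap<>();

    // Create a file with an explicit replication count, e.g. a job.jar
    // that many tasktrackers will read at job startup.
    public void create(String path, int replicas) {
        replication.put(path, replicas);
    }

    // Create a file with the cluster default.
    public void create(String path) {
        create(path, DEFAULT_REPLICATION);
    }

    public int getReplication(String path) {
        return replication.getOrDefault(path, DEFAULT_REPLICATION);
    }

    public static void main(String[] args) {
        FileTable t = new FileTable();
        t.create("/user/job.jar", 10);   // hot metadata: replicate widely
        t.create("/user/data/part-0");   // bulk data: default replication
        System.out.println(t.getReplication("/user/job.jar"));   // 10
        System.out.println(t.getReplication("/user/data/part-0")); // 3
    }
}
```

The point of the sketch is the asymmetry it enables: small, hot files such as job.xml and job.jar get a high count so the load of many readers is spread across datanodes, while large data files stay at the default to conserve disk.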

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira