Posted to common-issues@hadoop.apache.org by "Tomer Shiran (JIRA)" <ji...@apache.org> on 2010/02/14 05:16:28 UTC

[jira] Commented: (HADOOP-5670) Hadoop configurations should be read from a distributed system

    [ https://issues.apache.org/jira/browse/HADOOP-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12833504#action_12833504 ] 

Tomer Shiran commented on HADOOP-5670:
--------------------------------------

Today, many users put the configuration files on an NFS server (e.g., NetApp) and all daemons and clients read the configuration from there. I can see two goals for this JIRA:
# Remove the external dependency. That is, allow people to deploy Hadoop without an NFS server.
# Remove the single point of failure. The NFS server might not be highly available, in which case it becomes a single point of failure for the whole cluster.

Are there other issues with the current NFS-based architecture? What are we trying to solve here?
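For concreteness, here is a minimal sketch of what the ZooKeeper route mentioned in the issue could look like: a daemon or client reads a serialized *-site.xml out of ZooKeeper at startup instead of from NFS or the local disk. The znode path (/hadoop/conf/core-site.xml) and the class name are made up for illustration; this is not a proposed implementation, just a rough shape of the idea.

{code:java}
import java.io.ByteArrayInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.zookeeper.ZooKeeper;

public class ZKConfigLoader {
  // Hypothetical znode where an admin has published core-site.xml.
  private static final String CONF_ZNODE = "/hadoop/conf/core-site.xml";

  public static Configuration load(String zkQuorum) throws Exception {
    // Connect to the ZooKeeper ensemble (no watcher in this sketch;
    // a real version would watch the znode to pick up changes dynamically).
    ZooKeeper zk = new ZooKeeper(zkQuorum, 30000, null);
    try {
      // Fetch the serialized configuration XML from ZooKeeper.
      byte[] xml = zk.getData(CONF_ZNODE, false, null);
      // Layer it on top of the built-in defaults, the same way a local
      // core-site.xml on the classpath would be applied.
      Configuration conf = new Configuration();
      conf.addResource(new ByteArrayInputStream(xml));
      return conf;
    } finally {
      zk.close();
    }
  }
}
{code}

A daemon would call something like ZKConfigLoader.load("zk1:2181,zk2:2181,zk3:2181") at startup, which removes both the NFS dependency and (given a properly replicated ensemble) the single point of failure, at the cost of making ZooKeeper itself a bootstrap dependency.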


> Hadoop configurations should be read from a distributed system
> --------------------------------------------------------------
>
>                 Key: HADOOP-5670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5670
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: conf
>            Reporter: Allen Wittenauer
>
> Rather than distributing the Hadoop configuration files to every data node, compute node, etc., Hadoop should be able to read configuration information (dynamically!) from LDAP, ZooKeeper, whatever.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.