Posted to dev@couchdb.apache.org by "Javier Candeira (JIRA)" <ji...@apache.org> on 2014/11/17 23:10:34 UTC

[jira] [Commented] (COUCHDB-2390) Fauxton config, admin sections considered dangerous in 2.0

    [ https://issues.apache.org/jira/browse/COUCHDB-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215278#comment-14215278 ] 

Javier Candeira commented on COUCHDB-2390:
------------------------------------------

> Since we're going to drop Admin Party, we still have to provide some console tool to setup first admin

I'm working on a couchpasswd tool to write hashed passwords into the config file as part of 
https://issues.apache.org/jira/browse/COUCHDB-2367
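The idea is to write entries in the same -pbkdf2- form CouchDB already stores in the [admins] section once it has hashed a plaintext value. A rough sketch of the hashing step (not the actual couchpasswd code; the iteration count and salt length below are just placeholders):

    # Rough sketch, not couchpasswd itself: build a CouchDB-style "-pbkdf2-"
    # value for the [admins] section. Iteration count and salt length are
    # placeholder choices for illustration.
    import binascii, hashlib, os

    def admin_hash(password, iterations=10):
        salt = binascii.hexlify(os.urandom(16))       # stored alongside the hash
        derived = hashlib.pbkdf2_hmac('sha1', password.encode('utf-8'),
                                      salt, iterations, dklen=20)
        return '-pbkdf2-%s,%s,%d' % (binascii.hexlify(derived).decode('ascii'),
                                     salt.decode('ascii'), iterations)

    # would end up in local.ini as, e.g.:
    #   [admins]
    #   arms = -pbkdf2-<derivedkey>,<salt>,10
    print(admin_hash('s3cret'))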

> As backward compatible feature, it could read admins from ini, but never write them back with hash.

My opinion is that anyone setting up a cluster should copy the same config file (or [admins] section) to all nodes. Anyone half serious should be using some kind of puppet/chef/ansible tool for that anyway, and any learning hobbyist can very well go in and edit the file by hand. They need to set the same Erlang cookie on every node anyway, so why not the [admins] section too?
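For illustration, "copy the [admins] section to all nodes" can be as dumb as the following (paths and node list are made up, and any config-management tool does this better):

    # Rough sketch: replicate the [admins] section from a reference local.ini
    # into the other nodes' local.ini files. Paths are placeholders.
    import configparser

    NODE_INIS = ['/couchdb/node1/local.ini',
                 '/couchdb/node2/local.ini',
                 '/couchdb/node3/local.ini']

    reference = configparser.ConfigParser(interpolation=None)
    reference.read(NODE_INIS[0])
    admins = dict(reference.items('admins'))      # already-hashed values

    for path in NODE_INIS[1:]:
        cfg = configparser.ConfigParser(interpolation=None)
        cfg.read(path)
        if not cfg.has_section('admins'):
            cfg.add_section('admins')
        for user, hashed in admins.items():
            cfg.set('admins', user, hashed)
        with open(path, 'w') as f:
            cfg.write(f)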

The reason is that I see a source of conflicts in having cluster nodes read admins from the ini but never write them back. Consider this sequence:

1. Admin Arms sets hashed admin passwords for all admins in every node's config file. They are the same for every node/admin pair.
2. Nodes read the hashed admin passwords into _authdb.
3. Admin Boo changes their own admin password in _authdb.
4. Admin Chris is new, and they modify the config file on only one of the nodes, for themselves and for admin Dan (Dan asked Chris to take Dan's hashed password from another setup). Does this action change both their passwords in _authdb? Should _authdb replication check that the config was the same on all the nodes?
5. Whatever the case, admin Chris finally changes the config on all the nodes.
6. Admin Dan then attempts to change their own admin password in _authdb. Should they succeed? Which password should they use: the one Arms set in step 1, or the one Chris set in step 5?
7. The cluster is restored from backups, and as part of the restore, admin Em uses the old config, with the old [admins] section, except that she first puts in a new password for herself and writes it into the local.ini of all the nodes before bringing them up.

What is the expected behaviour now? Does Em have admin access, because the new config overrode the _authdb setup on startup? Or does the restored-from-backup _authdb override the [admins] section in the config? Do Boo and Dan still have access with the passwords they changed in _authdb? Does Chris use their old password that Arms set in step 1, or the new one they set themselves in steps 4-5?

I guess the question is: in the presence of _authdb, is the [admins] section in a config file just another way of feeding an admin's hashed password into the database? Does the cluster replicate the new admin passwords to all nodes? Going forward, what good is a config file that's out of sync then?
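Either way, an ops-side sanity check for [admins] drift between nodes is easy to write. A rough sketch, assuming each node still answers the classic per-node GET /_config/admins (node URLs and credentials below are placeholders):

    # Rough sketch: warn when the [admins] section differs between nodes.
    # Assumes the classic per-node /_config/admins endpoint; placeholder creds.
    import base64, json
    import urllib.request

    NODES = ['http://node1:5984', 'http://node2:5984', 'http://node3:5984']
    AUTH = 'Basic ' + base64.b64encode(b'admin:secret').decode('ascii')

    def admins_section(base_url):
        req = urllib.request.Request(base_url + '/_config/admins',
                                     headers={'Authorization': AUTH})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode('utf-8'))

    sections = {url: admins_section(url) for url in NODES}
    reference = sections[NODES[0]]
    for url, section in sections.items():
        if section != reference:
            print('[admins] section on %s differs from %s' % (url, NODES[0]))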


> Fauxton config, admin sections considered dangerous in 2.0
> ----------------------------------------------------------
>
>                 Key: COUCHDB-2390
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-2390
>             Project: CouchDB
>          Issue Type: Bug
>      Security Level: public(Regular issues) 
>          Components: BigCouch, Fauxton
>            Reporter: Joan Touzet
>            Priority: Blocker
>
> In Fauxton today, there are 2 sections to edit config-file settings and to create new admins. Neither of these sections will work as intended in a clustered setup.
> Any Fauxton session will necessarily be speaking to a single machine. The config APIs and admin user info as exposed will only add that information to a single node's .ini file.
> We should hide these features in Fauxton for now (short-term fix) and correct the config /admin creation APIs to work correctly in a clustered setup (medium-term fix).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)