Posted to notifications@couchdb.apache.org by GitBox <gi...@apache.org> on 2018/11/30 07:29:56 UTC

[GitHub] wohali closed pull request #38: local.ini sourced from tree on installation

URL: https://github.com/apache/couchdb-pkg/pull/38
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/snap/BUILD.md b/snap/BUILD.md
new file mode 100644
index 0000000..694b0af
--- /dev/null
+++ b/snap/BUILD.md
@@ -0,0 +1,30 @@
+# Building snaps
+
+## Prerequisites
+
+Building the snap requires Ubuntu 16.04. If building on 18.04, an LXD container running 16.04 is a convenient workaround.
+
+1. `lxc launch ubuntu:16.04 couchdb-pkg`
+1. `lxc exec couchdb-pkg bash`
+1. `sudo apt update`
+1. `sudo apt install snapd snapcraft`
+
+1. `git clone https://github.com/couchdb/couchdb-pkg.git`
+1. `cd couchdb-pkg`
+
+## How to do it
+
+1. Edit `snap/snapcraft.yaml` to point to the correct tag (e.g. `2.2.0`)
+1. `snapcraft`
+
+## Installation
+
+You may need to pull the built snap file from the LXD container to the host system.
+
+    $ lxc file pull couchdb-pkg/root/couchdb-pkg/couchdb_2.2.0_amd64.snap /tmp/couchdb_2.2.0_amd64.snap
+
+The self-crafted snap will need to be installed in devmode:
+
+    $ sudo snap install /tmp/couchdb_2.2.0_amd64.snap --devmode
+
+
diff --git a/snap/HOWTO.md b/snap/HOWTO.md
index 21124a5..a217af9 100644
--- a/snap/HOWTO.md
+++ b/snap/HOWTO.md
@@ -1,109 +1,112 @@
 # HOW TO install a cluster using snap
 
-# Create three machines
-
-In the instruction below, we are going to set up a three -- the miniumn number needed to gain performace improvement -- Couch cluster database. In this potted example we will be using LXD.
-
-We launch a new container and install couchdb on one machine
-
-1. localhost> `lxc launch ubuntu:18.04 couchdb-c1`
-1. localhost> `lxc exec couchdb-c1 bash`
-1. couchdb-c1> `apt update`
-1. couchdb-c1> `snap install couchdb`
-1. couchdb-c1> `logout`
-
-Here we use LXD copy function to speed up the test
-```
-lxc copy couchdb-c1 couchdb-c2
-lxc copy couchdb-c1 couchdb-c3
-lxc copy couchdb-c1 cdb-backup
-lxc start couchdb-c2
-lxc start couchdb-c3
-lxc start cdb-backup
-```
-
-# Configure CouchDB (using the snap tool)
-
-We are going to need the IP addresses. You can find them here.
-```
-lxc list
-```
-
-Now lets use the snap configuration tool to set the configuration files.
-```
-lxc exec couchdb-c1 snap set couchdb name=couchdb@10.210.199.199 setcookie=monster admin=password bind-address=0.0.0.0
-lxc exec couchdb-c2 snap set couchdb name=couchdb@10.210.199.254 setcookie=monster admin=password bind-address=0.0.0.0
-lxc exec couchdb-c3 snap set couchdb name=couchdb@10.210.199.24 setcookie=monster admin=password bind-address=0.0.0.0
-```
-The backup machine we will leave as a single instance and no sharding. 
-```
-lxc exec cdb-backup snap set couchdb name=couchdb@127.0.0.1 setcookie=monster admin=password bind-address=0.0.0.0 n=1 q=1
-```
-
-The snap must be restarted for the new configurations to take affect. 
-```
-lxc exec couchdb-c1 snap restart couchdb
-lxc exec couchdb-c2 snap restart couchdb
-lxc exec couchdb-c3 snap restart couchdb
-lxc exec cdb-backup snap restart couchdb
+## Create three nodes
+
+In the example below, we are going to set up a three-node CouchDB cluster. (Three is the minimum number needed to support clustering features.) We'll also set up a separate, single machine for making backups. In this example we will be using LXD.
+
+We launch a (single) new container, install CouchDB via snap from the store, enable the interfaces, open up the bind address, and set an admin password.
+```bash
+  localhost> lxc launch ubuntu:18.04 couchdb-c1
+  localhost> lxc exec couchdb-c1 bash
+  couchdb-c1> apt update
+  couchdb-c1> snap install couchdb --edge
+  couchdb-c1> snap connect couchdb:mount-observe
+  couchdb-c1> snap connect couchdb:process-control
+  couchdb-c1> curl -X PUT http://localhost:5984/_node/_local/_config/httpd/bind_address -d '"0.0.0.0"'
+  couchdb-c1> curl -X PUT http://localhost:5984/_node/_local/_config/admins/admin -d '"Be1stDB"'
+  couchdb-c1> exit
+```
+Back on localhost, we can then use the LXD copy function to speed up installation:
+```bash
+  $ lxc copy couchdb-c1 couchdb-c2
+  $ lxc copy couchdb-c1 couchdb-c3
+  $ lxc copy couchdb-c1 couchdb-bkup
+  $ lxc start couchdb-c2
+  $ lxc start couchdb-c3
+  $ lxc start couchdb-bkup
+```
+
+## Configure CouchDB using the snap tool
+
+We are going to need the IP addresses:
+```bash
+  $ lxc list
+```
+Now, again from localhost, and using the `lxc exec` command, we will use the snap configuration tool to set the
+various configuration files.
+```bash
+  $ lxc exec couchdb-c1 snap set couchdb name=couchdb@10.210.199.73 setcookie=monster
+  $ lxc exec couchdb-c2 snap set couchdb name=couchdb@10.210.199.221 setcookie=monster
+  $ lxc exec couchdb-c3 snap set couchdb name=couchdb@10.210.199.121 setcookie=monster
+```
+We will configure the backup machine as a single instance (n=1, q=1).
+```bash
+  $ lxc exec couchdb-bkup snap set couchdb name=couchdb@127.0.0.1 setcookie=monster
+  $ lxc exec couchdb-bkup -- curl -X PUT http://admin:Be1stDB@localhost:5984/_node/_local/_config/cluster/n -d '"1"'
+  $ lxc exec couchdb-bkup -- curl -X PUT http://admin:Be1stDB@localhost:5984/_node/_local/_config/cluster/q -d '"1"'
+
+```
+Each snap must be restarted for the new configurations to take effect.
+```bash
+  $ lxc exec couchdb-c1 snap restart couchdb
+  $ lxc exec couchdb-c2 snap restart couchdb
+  $ lxc exec couchdb-c3 snap restart couchdb
+  $ lxc exec couchdb-bkup snap restart couchdb
 ```
 The configuration files are stored here.
-```
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/vm.args
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.d/*
+```bash
+  $ lxc exec couchdb-bkup cat /var/snap/couchdb/current/etc/vm.args
 ```
 Any changes to CouchDB from the HTTP configuration tool are made here:
-```
-lxc exec cdb-backup cat /var/snap/couchdb/current/etc/local.d/local.ini
+```bash
+  $ lxc exec couchdb-bkup cat /var/snap/couchdb/current/etc/local.ini
 ```
 
-# Configure CouchDB Cluster (using the http interface)
+## Configure CouchDB Cluster (using the http interface)
 
-Now we set up the cluster via the http front-end. This only needs to be run once on the first machine. The last command syncs with the other nodes and creates the standard databases.
-```
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.254", "port": "5984", "username": "admin", "password":"password"}'
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.24", "port": "5984", "username": "admin", "password":"password"}'
-curl -X POST -H "Content-Type: application/json" http://admin:password@10.210.199.199:5984/_cluster_setup -d '{"action": "finish_cluster"}'
+Now we set up the cluster via the HTTP front-end. This only needs to be run once on the first machine. The last command
+syncs with the other nodes and creates the standard databases.
+```bash
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.221", "port": "5984", "username": "admin", "password":"Be1stDB"}'
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.121", "port": "5984", "username": "admin", "password":"Be1stDB"}'
+  $ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "finish_cluster"}'
 ```
 Now we have a functioning three-node cluster.
 
-# An Example Database
+## An Example Database
 
 Let's create an example database ...
+```bash
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aab -d '{"test":2}' -H "Content-Type: application/json"
+  $ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aac -d '{"test":3}' -H "Content-Type: application/json"
 ```
-curl -X PUT http://admin:password@10.210.199.199:5984/example
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aab -d '{"test":2}' -H "Content-Type: application/json"
-curl -X PUT http://admin:password@10.210.199.199:5984/example/aac -d '{"test":3}' -H "Content-Type: application/json"
-```
-... And see that it is sync'd accross the three nodes.
-```
-curl -X GET http://admin:password@10.210.199.199:5984/example/_all_docs
-curl -X GET http://admin:password@10.210.199.254:5984/example/_all_docs
-curl -X GET http://admin:password@10.210.199.24:5984/example/_all_docs
+... And see that it is created on all three nodes.
+```bash
+  $ curl -X GET http://admin:Be1stDB@10.210.199.73:5984/example/_all_docs
+  $ curl -X GET http://admin:Be1stDB@10.210.199.221:5984/example/_all_docs
+  $ curl -X GET http://admin:Be1stDB@10.210.199.121:5984/example/_all_docs
 ```
-# Backing Up CouchDB
+## Backing Up CouchDB
 
-Our back up server is on 10.210.199.242. We will manually replicate this from one (anyone) of the nodes.
+Our backup server is on 10.210.199.242. We will manually replicate to it from any one of the nodes.
+```bash
+  $ curl -X POST http://admin:Be1stDB@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.73:5984/example", "target":"example", "continuous":false,"create_target":true}' -H "Content-Type: application/json"
+  $ curl -X GET http://admin:Be1stDB@10.210.199.242:5984/example/_all_docs
 ```
-curl -X POST http://admin:password@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.199:5984/example", "target":"example", "continuous":false,"create_target":true}' -H "Content-Type: application/json"
-curl -X GET http://admin:password@10.210.199.242:5984/example/_all_docs
+Whereas the data store for the cluster's nodes is sharded:
+```bash
+  $ lxc exec couchdb-c1 ls /var/snap/couchdb/common/data/shards/
 ```
-The data store for the clusters nodes are sharded 
-```
-lxc exec couchdb-c1 ls /var/snap/couchdb/common/2.x/data/shards/
-```
-
-The backup database is a single file.
-```
-lxc exec cdb-backup ls /var/snap/couchdb/common/2.x/data/shards/00000000-ffffffff/
+The backup database is a single directory:
+```bash
+  $ lxc exec couchdb-bkup ls /var/snap/couchdb/common/data/shards/
 ```
 
-# Monitoring CouchDB 
-
-The logs, by default, are captured by journald
-```
-lxc exec couchdb-c1 bash
-journalctl -u snap.couchdb -f
-```
+## Monitoring CouchDB 
 
+The logs, by default, are captured by journald. First connect to the node in question:
+  `$ lxc exec couchdb-c1 bash`
+Then, show logs as usual. The unit name is prefixed with 'snap.' and its suffix may vary depending on the snap version.
+  `$ journalctl -u snap.couchdb* -f`
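For reference, the settings applied above end up under /var/snap/couchdb/current/etc/. On couchdb-c1, the `snap set` call translates into vm.args entries like the following (the IP and cookie are the example values used throughout this HOWTO):

```
-name couchdb@10.210.199.73
-setcookie monster
```

The two curl calls against `_config/cluster` on couchdb-bkup are persisted by CouchDB itself into `local.ini`, under its `[cluster]` section.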
diff --git a/snap/README.md b/snap/README.md
index 65ce54e..3be05d3 100644
--- a/snap/README.md
+++ b/snap/README.md
@@ -1,61 +1,70 @@
-# Building snaps
+# Snap Installation
 
-## Prerequisites
+## Downloading from the snap store
 
-CouchDB requires Ubuntu 16.04. If building on 18.04, then LXD might be useful. 
+The snap can be installed from a file or directly from the snap store. It is, for the moment, listed in the edge channel.
 
-1. `lxc launch ubuntu:16.04 couchdb-pkg`
-1. `lxc exec couchdb-pkg bash`
-1. `sudo apt update`
-1. `sudo apt install snapd snapcraft`
+```
+    $ sudo snap install couchdb --edge
+```  
+## Enable snap permissions
 
-1. `git clone https://github.com/couchdb/couchdb-pkg.git`
-1. `cd couchdb-pkg`
+The snap installation uses AppArmor to protect your system. CouchDB requests access to two interfaces: mount-observe, which
+is used by the disk compactor to know when to initiate a cleanup; and process-control, which is used by the indexer to set
+the priority of couchjs to 'nice'. These two interfaces, while not required, are useful. If they are not enabled, CouchDB will
+still run, but you will need to run the compactor manually and couchjs may put a heavy load on the system when indexing. 
 
-## How to do it
+To connect the interfaces type:
+   ```
+   $ sudo snap connect couchdb:mount-observe
+   $ sudo snap connect couchdb:process-control
+   ```
+## Snap configuration
 
-1. Edit `snap/snapcraft.yaml` to point to the correct tag (e.g. `2.2.0`)
-1. `snapcraft`
+There are two levels of hierarchy within the CouchDB configuration.
 
-# Snap Instalation
+The default layer is stored in /snap/couchdb/current/rel/couchdb/etc/: default.ini is
+consulted first, then any files in the default.d directory. In the snap installation
+this is mounted read-only.
 
-You may need to pull the LXD file to the host system.
+The local layer is stored in /var/snap/couchdb/current/etc/ on the writable /var mount. 
+Within this second layer, configurations are set within local.ini or superseded by any
+file within local.d. Configuration management tools (like puppet, chef, ansible, salt) operate here.
 
-    $ lxc file pull couchdb-pkg/root/couchdb-pkg/couchdb_2.2.0_amd64.snap /tmp/couchdb_2.2.0_amd64.snap
+The name of the Erlang process and the security cookie used are set within the vm.args file.
+These can be set using the snap native configuration. For example, when setting up
+a cluster over several machines the convention is to set the Erlang name to couchdb@your.ip.address.
 
-The self crafted snap will need to be installed in devmode
+```
+    $ sudo snap set couchdb name=couchdb@216.3.128.12 setcookie=cutter
+```
 
-    $ sudo snap install /tmp/couchdb_2.2.0_amd64.snap --devmode 
+Snap native configuration changes only come into effect after a restart:
 
-# Snap Configuration
+```
+    $ sudo snap restart couchdb
+```
 
-There are two levels of erlang and couchdb configuration hierarchy. 
+CouchDB options can be set via configuration over HTTP, as below.
 
-The default layer is stored in /snap/couchdb/current/rel/couchdb/etc/ and is read only. 
-The user override layer, is stored in /var/snap/couchdb/current/etc/ and is writable. 
-Within this second layer, configurations are set with the local.d directory (one file 
-per section) or the local.ini (co-mingled). The "snap set" command works with the 
-former (local.d) and couchdb http configuration overwrites the latter (local.ini). 
-Entries in local.ini supersede those in the local.d directory.
+```
+    $ curl -X PUT http://localhost:5984/_node/_local/_config/httpd/bind_address -d '"0.0.0.0"'
+    $ curl -X PUT http://localhost:5984/_node/_local/_config/couchdb/delayed-commits -d '"true"'
+```
 
-The name of the erlang process and the security cookie used is set in vm.args file.
-This can be set through the snap native configuration. For example, when setting up 
-a cluster over several machines the convention is to set the erlang 
-name to couchdb@your.ip.address. Both erlang and couchdb configuration changes can be 
-made at the same time.
+Changes here do not require a restart.
 
-    $ sudo snap set couchdb name=couchdb@216.3.128.12 setcookie=cutter admin=Be1stDB bind-address=0.0.0.0
+For anything else in vm.args, or configuration not whitelisted over HTTP, you can edit
+the /var/snap/couchdb/current/etc files by hand and restart CouchDB. 
 
-Snap set variable can not contain underscore character, but any dashes are converted to underscore when
-writing to file. Wrap double quotes around any bracets and avoid spaces.
+## Example Cluster
 
-    $ sudo snap set couchdb delayed-commits=true erlang="{couch_native_process,start_link,[]}"
+See the [HOWTO][1] file for an example of a three-node cluster and further notes.
 
-Snap Native Configuration changes only come into effect after a restart
-    
-    $ sudo snap restart couchdb
+## Building a Private Snap
 
-# Example Cluster
+If you want to build your own snap file from source, see [BUILD][2] for instructions.
 
-See the HOWTO.md file to see an example of a three node cluster.
+[1]: HOWTO.md
+[2]: BUILD.md
 
diff --git a/snap/meta/hooks/configure b/snap/meta/hooks/configure
index 8c2b1aa..d6af526 100755
--- a/snap/meta/hooks/configure
+++ b/snap/meta/hooks/configure
@@ -3,7 +3,6 @@
 set -e
 
 VM_ARGS=$SNAP_DATA/etc/vm.args
-LOCAL_DIR=$SNAP_DATA/etc/local.d
 
 
 ## add or replace for the vm.arg file
@@ -19,23 +18,6 @@ _modify_vm_args() {
   fi
 }
 
-_modify_ini_file() {
-  section=$1
-  opt=`echo $2 | tr "-" "_"`
-  value="$3"
-  config_file=${LOCAL_DIR}/${section}.ini
-  if [ ! -e ${config_file} ]; then
-    echo "[${section}]" > $config_file
-  fi
-  replace_line="$opt=$value"
-  if $(grep -q "^$opt=" $config_file); then
-    sed "s/^$opt=.*/$replace_line/" $config_file 2>/dev/null >${config_file}.new
-    mv -f ${config_file}.new ${config_file} 2>/dev/null
-  else
-    echo $replace_line >> $config_file
-  fi
-}
-
 # The vm_args file can only be changed from the filesystem
 # configuration vm.args file
 
@@ -49,130 +31,3 @@ do
   fi
 done
 
-# The following list is either those fields that are whitelisted but 
-# useful to modifiy before first run; or those fields blacklisted
-# The snap set command modifies the files in local.d; any changes 
-# via the URL are reflected in local.ini
-
-# Special Cases
-
-# local.d/admins.ini
-passwd=$(snapctl get admin)
-if [ ! -z "$passwd" ]; then
-   _modify_ini_file admins admin $passwd
-   chmod 600 $SNAP_DATA/etc/local.d/admins.ini
-   sleep 0.125
-fi
-
-# local.d/ssl.ini
-port=$(snapctl get ssl-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file ssl port $port
-   sleep 0.125
-fi
-
-# local.d/httpd.ini
-port=$(snapctl get httpd-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file httpd port $port
-   sleep 0.125
-fi
-
-# local.d/chttpd.ini
-port=$(snapctl get chttpd-port)
-if [ ! -z "$port" ]; then
-   _modify_ini_file chttpd port $port
-   sleep 0.125
-fi
-
-# Generic Cases
-
-# local.d/chttpd.ini
-CHTTPD_OPTIONS="port bind-address require-valid-user"
-for key in $CHTTPD_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file chttpd $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/cluster.ini
-CLUSTER_OPTIONS="n q"
-for key in $CLUSTER_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file cluster $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/compaction_daemon.ini
-COMPACTION_DAEMON_OPTIONS="check-interval"
-for key in $COMPACTION_DAEMON_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file compaction_daemon $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/couchdb.ini
-COUCHDB_OPTIONS="database-dir view-index-dir delayed-commits"
-for key in $COUCHDB_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file couchdb $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/log.ini
-LOG_OPTIONS="writer file level"
-for key in $LOG_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file log $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/native_query_servers.ini
-NATIVE_QUERY_SERVERS_OPTIONS="query erlang"
-for key in $NATIVE_QUERY_SERVERS_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file native_query_servers $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/couch_peruser.ini
-COUCH_PERUSER_OPTIONS="database-prefix delete-dbs enable"
-for key in $COUCH_PERUSER_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file couch_peruser $key $val
-    sleep 0.125
-  fi
-done
-
-# local.d/uuids.ini
-UUIDS_OPTIONS="algorithm max-count"
-for key in $UUIDS_OPTIONS
-do
-  val=$(snapctl get $key)
-  if [ ! -z "$val" ]; then
-    _modify_ini_file uuids $key $val
-    sleep 0.125
-  fi
-done
-
-
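For provenance, the add-or-replace behaviour of the deleted `_modify_ini_file` helper can be exercised standalone. This sketch is lightly cleaned up (POSIX `$( )` instead of backticks, quoting added) and writes to a temp directory instead of `$SNAP_DATA/etc/local.d`:

```shell
#!/bin/sh
# Standalone sketch of the ini add-or-replace logic removed above.
set -e
LOCAL_DIR=$(mktemp -d)   # stands in for $SNAP_DATA/etc/local.d

_modify_ini_file() {
  section=$1
  opt=$(echo "$2" | tr "-" "_")   # snap keys use dashes, ini keys underscores
  value="$3"
  config_file=${LOCAL_DIR}/${section}.ini
  if [ ! -e "${config_file}" ]; then
    echo "[${section}]" > "$config_file"
  fi
  replace_line="$opt=$value"
  if grep -q "^$opt=" "$config_file"; then
    sed "s/^$opt=.*/$replace_line/" "$config_file" > "${config_file}.new"
    mv -f "${config_file}.new" "${config_file}"
  else
    echo "$replace_line" >> "$config_file"
  fi
}

_modify_ini_file chttpd bind-address 0.0.0.0    # first call creates the section and appends the key
_modify_ini_file chttpd bind-address 127.0.0.1  # second call replaces it in place
cat "${LOCAL_DIR}/chttpd.ini"
```

Running it prints a two-line `[chttpd]` section containing only the replaced value.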
diff --git a/snap/meta/hooks/install b/snap/meta/hooks/install
index eb7a541..0a2195c 100755
--- a/snap/meta/hooks/install
+++ b/snap/meta/hooks/install
@@ -2,8 +2,11 @@
 
 mkdir -p ${SNAP_DATA}/etc/local.d 
 
-cp ${SNAP}/rel/couchdb/etc/vm.args ${SNAP_DATA}/etc/vm.args
-
-cp ${SNAP}/rel/couchdb/etc/local.d/*.ini ${SNAP_DATA}/etc/local.d
+if [ ! -f ${SNAP_DATA}/etc/vm.args ]; then
+   cp ${SNAP}/rel/couchdb/etc/vm.args ${SNAP_DATA}/etc/vm.args
+fi
 
+if [ ! -f ${SNAP_DATA}/etc/local.ini ]; then
+   cp ${SNAP}/rel/couchdb/etc/local.ini ${SNAP_DATA}/etc/local.ini
+fi
 
diff --git a/snap/snap_run b/snap/snap_run
index 5fa783a..e608086 100755
--- a/snap/snap_run
+++ b/snap/snap_run
@@ -16,7 +16,7 @@
 
 export HOME=$SNAP_DATA
 export COUCHDB_ARGS_FILE=${SNAP_DATA}/etc/vm.args
-export ERL_FLAGS="-couch_ini ${SNAP}/rel/couchdb/etc/default.ini ${SNAP_DATA}/etc/local.d ${SNAP_DATA}/etc/local.ini"
+export ERL_FLAGS="-couch_ini ${SNAP}/rel/couchdb/etc/default.ini ${SNAP}/rel/couchdb/etc/default.d ${SNAP_DATA}/etc/local.ini ${SNAP_DATA}/etc/local.d"
 
 mkdir -p ${SNAP_DATA}/etc 
 
@@ -26,7 +26,6 @@ fi
 
 if [ ! -d ${SNAP_DATA}/etc/local.d ]; then
     mkdir ${SNAP_DATA}/etc/local.d
-    cp ${SNAP}/rel/couchdb/etc/local.d/*.ini ${SNAP_DATA}/etc/local.d
 fi
 
 if [ ! -e ${SNAP_DATA}/etc/local.ini ]; then
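The reordered ERL_FLAGS chain above is what gives local files precedence: later entries in the `-couch_ini` list override earlier ones. A minimal shell illustration of that "last definition wins" layering (the temp files and the `port` key are made up for the demo; CouchDB's real ini parser is in Erlang):

```shell
#!/bin/sh
set -e
# Fake a two-layer -couch_ini chain: default.ini, then local.ini.
dir=$(mktemp -d)
printf '[chttpd]\nport=5984\n' > "$dir/default.ini"
printf '[chttpd]\nport=6984\n' > "$dir/local.ini"

# Walk the chain in order; the last file defining the key wins.
port=""
for f in "$dir/default.ini" "$dir/local.ini"; do
  v=$(sed -n 's/^port=//p' "$f")
  if [ -n "$v" ]; then port=$v; fi
done
echo "effective port: $port"
```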
diff --git a/snap/snapcraft.yaml b/snap/snapcraft.yaml
index 8dd7dea..1eadf03 100644
--- a/snap/snapcraft.yaml
+++ b/snap/snapcraft.yaml
@@ -58,7 +58,7 @@ parts:
         plugin: dump
         source: ./snap/
         organize:
-            couchdb.ini: rel/couchdb/etc/local.d/couchdb.ini
+            couchdb.ini: rel/couchdb/etc/default.d/couchdb.ini
             snap_run: rel/couchdb/bin/snap_run
 
     packages:


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services