Solves the problem of maintaining configuration volumes in the Docker world.

It allows you to have a container (in the same pod for Kubernetes, or as a sidekick in Rancher) maintaining a few configuration/resource directories from different sources:

- a Git repository
- an S3 bucket
- another volume or server, through rsync

A source can be refreshed through a webhook, and the status of all the containers can be queried through a simple API.
First you need to configure the source for the master configuration, using the `MASTER_CONFIG` environment variable with something like:

```yaml
env:
  MASTER_CONFIG: |
    type: git
    repo: git@github.com:camptocamp/master_config.git
```

That will use the `shared_config_manager.yaml` file at the root of the repository to configure all the sources. This configuration file looks like this:
```yaml
sources:
  test_git:
    type: git
    repo: git@github.com:camptocamp/test_git.git
    target_dir: /usr/local/tomcat/webapps/ROOT/print-apps
```
With this example, the config container will contain a `/usr/local/tomcat/webapps/ROOT/print-apps` directory that is a clone of the `git@github.com:camptocamp/test_git.git` repository and is identified as `test_git`.
You can configure more than one source.
There is a standalone mode where you configure only one source, directly through the `MASTER_CONFIG` environment variable. For that, set the `standalone` key to `true` in the `MASTER_CONFIG`; no `shared_config_manager.yaml` file will be looked up.
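For example, a minimal standalone `MASTER_CONFIG` could look like this (the repository URL and target directory are just the placeholder values used earlier in this document):

```yaml
env:
  MASTER_CONFIG: |
    standalone: true
    type: git
    repo: git@github.com:camptocamp/test_git.git
    target_dir: /usr/local/tomcat/webapps/ROOT/print-apps
```

Here the single source is described directly in the environment variable, so no `shared_config_manager.yaml` is fetched.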
There is also a mode where you can put the sources directly in the `MASTER_CONFIG`, by setting the `sources` section at the root of the `MASTER_CONFIG`.
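A sketch of such an inline configuration, reusing the `test_git` source from the example above:

```yaml
env:
  MASTER_CONFIG: |
    sources:
      test_git:
        type: git
        repo: git@github.com:camptocamp/test_git.git
        target_dir: /usr/local/tomcat/webapps/ROOT/print-apps
```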
A few environment variables can be used to tune the containers:

- `C2C_REDIS_URL`: must point to a running Redis (typically `redis://redis:6379`) in order to broadcast the refresh notifications
- `MASTER_CONFIG`: the master configuration (a string containing the YAML config)
- `ROUTE_PREFIX`: the prefix to use for the HTTP API (defaults to `/scm`)
- `TAG_FILTER`: load only the sources having the given tag (the master config is always loaded)
- `TARGET`: the default base directory for the `target_dir` configuration (defaults to `/config`)
- `MASTER_TARGET`: where to store the master config (defaults to `/master_config`)
- `API_BASE_URL`: how the slaves reach the master
- `API_MASTER`: if defined, this instance is a master with slaves (no template evaluation)
- `SCM_SECRET`: the secret used to authenticate requests between the client and the server
See https://github.com/camptocamp/c2cwsgiutils for other parameters.
Every source accepts the following common configuration options:

- `type`: the type of source
- `target_dir`: the location where the source will be copied (defaults to the value of the source's id, within `/config`)
- `excludes`: the list of files/directories to exclude
- `template_engines`: a list of template engine configurations
- `tags`: an optional list of tags; slaves having `TAG_FILTER` defined will load only the sources having a matching tag
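As an illustration, a source combining several of these common options could look like this (the excluded paths and the `print` tag are made-up values):

```yaml
sources:
  test_git:
    type: git
    repo: git@github.com:camptocamp/test_git.git
    target_dir: /usr/local/tomcat/webapps/ROOT/print-apps
    excludes:
      - README.md
      - docs
    tags:
      - print
```

A slave started with `TAG_FILTER: print` would load this source; slaves with a different tag filter would skip it.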
Options specific to `git` sources:

- `type`: `git`
- `ssh_key`: the private SSH key to use as identity (optional)
- `repo`: the Git repository URL
- `branch`: the Git branch to use (defaults to `master`)
- `sub_dir`: use this if only a sub-directory of the repository needs to be copied (defaults to the root of the repository)
- `sparse`: if `true` (the default) and `sub_dir` is defined, a sparse clone is used. Disable this if you have multiple sources using the same repository (it avoids cloning it once per source).
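For instance, a hypothetical source copying only one sub-directory of a repository that is shared by several sources might be configured like this (the branch and sub-directory names are illustrative):

```yaml
sources:
  print_apps:
    type: git
    repo: git@github.com:camptocamp/test_git.git
    branch: prod
    sub_dir: print-apps
    # several sources use this repository, so disable the sparse clone
    sparse: false
```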
Options specific to `rsync` sources:

- `type`: `rsync`
- `source`: the source for the rsync command
- `ssh_key`: the private SSH key to use as identity (optional)
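A sketch of an rsync source; the host, path, and key are placeholders:

```yaml
sources:
  legacy_config:
    type: rsync
    source: config@config-server:/srv/app/config/
    ssh_key: |
      -----BEGIN OPENSSH PRIVATE KEY-----
      ...
      -----END OPENSSH PRIVATE KEY-----
```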
Options specific to `rclone` sources:

- `type`: `rclone`
- `config`: the content of the rclone configuration section to use
- `sub_dir`: an optional sub-directory inside the remote (bucket + path for S3 remotes)
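For example, an S3-backed source could be sketched like this (the remote parameters and bucket name are assumptions, not defaults):

```yaml
sources:
  s3_assets:
    type: rclone
    config: |
      type = s3
      provider = AWS
      region = eu-west-1
    sub_dir: my-bucket/print-apps
```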
Each template engine configuration accepts the following options:

- `type`: can be `mako` or `shell`
- `data`: a dictionary of key/value pairs to pass as parameters to the template engine
- `environment_variables`: if `true`, fall back to the process's environment variables when a key is not found in `data`. Only variables starting with a prefix listed in `SCM_ENV_PREFIXES` (a `:`-separated list) are allowed.
- `dest_sub_dir`: if specified, all the files, including the ones not evaluated as templates, will be copied into the given sub-directory
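Putting it together, a source with a `mako` template engine could look like this (the `instance` key and `rendered` sub-directory are illustrative values, not defaults):

```yaml
sources:
  test_git:
    type: git
    repo: git@github.com:camptocamp/test_git.git
    template_engines:
      - type: mako
        data:
          instance: production
        # also expose environment variables matching SCM_ENV_PREFIXES
        environment_variables: true
        dest_sub_dir: rendered
```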
By default the image starts a WSGI server listening on port 8080. In big deployments a full WSGI server can use a sizeable amount of RAM, so you may want to run only a couple of such containers and run the rest as slaves. For that, change the command run by the container to `shared-config-slave`.
docker-compose.yaml:

```yaml
version: '2'
services:
  scm_api:
    image: camptocamp/shared_config_manager:latest
    environment: &scm_env
      C2C_REDIS_URL: redis://redis:6379
      MASTER_CONFIG: &master_config |
        type: git
        repo: git@github.com:camptocamp/master_config.git
      SCM_SECRET: changeme
      TAG_FILTER: master
    links:
      - redis
    labels:
      io.rancher.container.hostname_override: container_name
      lb.routing_key: scm-${ENVIRONMENT_NAME}
      lb.haproxy_backend.timeout_server: 'timeout server 120s'
      lb.haproxy_backend.maxconn: default-server maxconn 1
  scm_slave:
    image: camptocamp/shared_config_manager:latest
    environment: *scm_env
    command: ['shared-config-slave']
    volumes:
      - /usr/local/tomcat/webapps/ROOT/print-apps
    links:
      - redis
    labels:
      io.rancher.container.hostname_override: container_name
      io.rancher.scheduler.global: 'true'
  print:
    image: camptocamp/mapfish_print:3.14.1
    labels:
      io.rancher.container.hostname_override: container_name
      io.rancher.scheduler.global: 'true'
      io.rancher.sidekicks: scm_slave
    volumes_from:
      - scm_slave
  redis:
    labels:
      io.rancher.container.hostname_override: container_name
    image: redis:4
    mem_limit: 64m
    command: redis-server --save "" --appendonly no
    user: www-data
```
rancher-compose.yaml:

```yaml
version: '2'
services:
  scm_api:
    scale: 1
    health_check:
      port: 8080
      interval: 10000
      unhealthy_threshold: 3
      request_line: GET /scm/c2c/health_check HTTP/1.0
      healthy_threshold: 1
      response_timeout: 10000
    strategy: recreate
  scm_slave: {}
  redis:
    scale: 1
    health_check:
      port: 6379
      interval: 2000
      initializing_timeout: 60000
      unhealthy_threshold: 3
      healthy_threshold: 1
      response_timeout: 3000
    strategy: recreate
```
For a real-world example, see https://github.com/camptocamp/private-geo-charts/tree/master/mutualized-print.
`GET {ROUTE_PREFIX}/1/refresh/{ID}`

Refreshes the given source `{ID}`. Returns 200 in case of success; the actual work is done asynchronously. To refresh the master configuration (the list of sources), use `master` as the ID.
`POST {ROUTE_PREFIX}/1/refresh/{ID}`

Same as the GET API, but meant to be used with a GitHub/GitLab webhook for push events. Events for other branches are ignored.
`GET {ROUTE_PREFIX}/1/refresh`

Refreshes all sources. Returns 200 in case of success; the actual work is done asynchronously. The master configuration is not refreshed.

`POST {ROUTE_PREFIX}/1/refresh`

Same as the GET API, but meant to be used with a GitHub/GitLab webhook for push events. Events for other branches are ignored.
`GET {ROUTE_PREFIX}/1/status`

Returns the global status, which looks like this:
```json
{
  "slaves": {
    "api": {
      "sources": {
        "master": {
          "hash": "240930ea8580d8392544bc0f42bdd1720b772a46",
          "repo": "/repos/master",
          "type": "git"
        },
        "test_git": {
          "hash": "4e066840860d77b143cbecbb8d23db3b755980b2",
          "repo": "/repos/test_git",
          "template_engines": [
            {
              "environment_variables": { "TEST_ENV": "42" },
              "type": "shell"
            }
          ],
          "type": "git"
        }
      }
    },
    "slave": {
      "sources": {
        "master": {
          "hash": "240930ea8580d8392544bc0f42bdd1720b772a46",
          "repo": "/repos/master",
          "type": "git"
        },
        "test_git": {
          "hash": "4e066840860d77b143cbecbb8d23db3b755980b2",
          "repo": "/repos/test_git",
          "template_engines": [
            {
              "environment_variables": { "TEST_ENV": "42" },
              "type": "shell"
            }
          ],
          "type": "git"
        }
      }
    }
  }
}
```
`GET {ROUTE_PREFIX}/1/status/{ID}`

Returns the status for the given source ID, which looks like this:
```json
{
  "statuses": [
    {
      "hash": "4e066840860d77b143cbecbb8d23db3b755980b2",
      "repo": "/repos/test_git",
      "template_engines": [
        {
          "environment_variables": { "TEST_ENV": "42" },
          "type": "shell"
        }
      ],
      "type": "git"
    }
  ]
}
```
`GET {ROUTE_PREFIX}/1/tarball/{ID}`

Returns a `.tar.gz` archive containing the current content of the given source.
Install the pre-commit hooks:

```shell
pip install pre-commit
pre-commit install --allow-missing-config
```