Having issues? This wiki page might help.
Make sure Whitenoise v6.8 or higher is installed in your virtual environment and defined in your requirements.txt file. v6.7 does not serve static files at all (see issue).
pip uninstall whitenoise
pip install whitenoise~=6.8
# requirements.txt
# replace existing whitenoise line with this
whitenoise~=6.8
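To double-check which version actually got installed in your virtual environment, you can ask pip:
pip show whitenoise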
In your 'settings.py' file, add these lines in the specified locations.
Warning
Make sure you copy this stuff exactly as it is. MultiHost expects these fields to be set exactly as shown below. If you ever get an error about unknown/missing paths, invalid URL configs, database connections, or missing static files, come back and check here first.
- Add these imports
# import os
from environs import Env
- Initialize the Env() object
env = Env()
env.read_env()
- Place this after BASE_DIR is set
# Make sure you've initialized the `env` object!
FORCE_SCRIPT_NAME = (
    '/' + env.str('SITE_NAME')  # if the SITE_NAME env variable is set
    if env.str('SITE_NAME', default=None)  # if not...
    else ''  # ...set to nothing to let Django take over
)
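Just to illustrate the two cases (group1 here is a made-up example name):
# SITE_NAME=group1  ->  FORCE_SCRIPT_NAME is '/group1'
# SITE_NAME unset   ->  FORCE_SCRIPT_NAME is '' (Django handles paths normally)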
- Set DEBUG to False; this is a "production" environment, and we don't want our precious insider knowledge leaking out, now do we?
DEBUG = env.bool(
    'DEBUG',
    default=False  # or True, to fail unsafely
)
- Modify existing elements in these two lists
ALLOWED_HOSTS = [
    'localhost',
    '127.0.0.1',
    'csci258.cs.umt.edu',  # this is the hostname of the class VM
]
INTERNAL_IPS = [
    '127.0.0.1',
    'localhost',
]
- Put these two middleware entries in this order
MIDDLEWARE = [
    # ...
    'django.middleware.security.SecurityMiddleware',  # already exists
    'whitenoise.middleware.WhiteNoiseMiddleware',  # this *must* go right after the one above
    # ...
]
- Replace your existing database configs with this (unless you have a third one for a specific test case; leave that one in place)
DATABASES = {
    # PostgreSQL database used in production
    'prod': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': env.str('POSTGRES_DB', default=None),
        'USER': env.str('POSTGRES_USER', default=None),
        'PASSWORD': env.str('POSTGRES_PASSWORD', default=None),
        'HOST': 'postgres',
        'PORT': '5432',
    },
    # local SQLite database used for development and testing
    'local': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    },
    # any other configs would go down here
}
# defaults to 'local' if the DJANGO_DATABASE environment variable isn't set
# (the Docker config sets it in production)
default_database = env.str('DJANGO_DATABASE', default='local')
# point the 'default' alias at the selected config
DATABASES['default'] = DATABASES[default_database]
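In practice this means the site on the VM talks to PostgreSQL (Docker sets DJANGO_DATABASE there), while on your own machine the SQLite config wins by default. For example (the values here just mirror the config names above):
# local development: DJANGO_DATABASE is unset, so 'local' (SQLite) is used
python manage.py runserver
# or pick a config explicitly for a one-off command
DJANGO_DATABASE=local python manage.py test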
- Replace your static files settings with exactly this
# URL path to serve static files from; ex: '/group1/static/'
STATIC_URL = FORCE_SCRIPT_NAME + '/static/'
# project static files location
STATICFILES_DIRS = [BASE_DIR / 'static']
# collected static files location; includes other apps, like admin
STATIC_ROOT = BASE_DIR / 'staticfiles'
# enable caching and compression when serving static files
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
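One gotcha with the manifest storage: it can only serve files that have actually been collected into STATIC_ROOT. The deploy script presumably handles this step during deployment, but if you're ever chasing missing static files, make sure collection has run:
# gather all static files (project, admin, etc.) into STATIC_ROOT
python manage.py collectstatic --noinput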
You'll need a way to store some secrets that shouldn't be committed to Git. While this is by no means the only way, it's the one I'd start with.
- In your 'settings.py', change the Django SECRET_KEY so it loads from your '.env' secrets file instead of being hard-coded. Here's an example:
# load the key; use default if key var not set
SECRET_KEY = env.str('SECRET_KEY', default='django-insecure-4$6@5&r4%kex2%me935-8q^=ep=ufnyv89&i7@dx^68924o2q#')
Don't worry about generating this key manually, as deploy prep generates it and adds it to '.env' for you.
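For reference, the '.env' file is where the variables that 'settings.py' reads should live. A rough sketch of what it might contain, with placeholder values (the real ones are generated or assigned for your group):
# .env -- never commit this file to Git!
SITE_NAME=group1
DEBUG=False
DJANGO_DATABASE=prod
SECRET_KEY=replace-me-with-the-generated-key
POSTGRES_DB=group1_db
POSTGRES_USER=group1_user
POSTGRES_PASSWORD=replace-me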
More examples can be found on the Wiki.
- Use SSH (Secure Shell) to open a remote terminal into the class VM. Like above, replace username with the username given to you.
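The command looks something like this (hostname taken from the ALLOWED_HOSTS entry above):
ssh username@csci258.cs.umt.edu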
- Use cd to navigate to your group's directory. Like above, replace GROUP_NAME with your group's actual name (and TERM with the current term, as in the example).
cd /django/TERM/GROUP_NAME
# example:
cd /django/fall24/group1
- Before doing anything else, run the following command to set up your group's environment. It'll walk you through a couple of steps.
deploy prep
Thankfully, deployment is super simple. I've made a helpful script, deploy, to simplify dealing with Docker Compose. Once you and your group have done everything above, run:
deploy start site
To see its status, run:
deploy status site
This displays standard Docker Compose output, since that's what's running under the hood. In fact, the command being run is:
# the last command under the hood
docker compose -f docker-compose.site.yml ps
To take down your site, run:
deploy stop site
Generally, commands can be run in either the Gunicorn or PostgreSQL Docker containers with the following syntax:
# under the hood
docker compose -f docker-compose.site.yml exec (container) (command) [args...]
However, with deploy, it's a bit simpler:
deploy exec site (container) (command) [args...]
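For example (the container names gunicorn and postgres are assumed from the settings and commands above; swap in your own POSTGRES_USER and POSTGRES_DB values):
# list the files inside the Gunicorn container
deploy exec site gunicorn ls
# open a psql shell in the database container
deploy exec site postgres psql -U your_postgres_user -d your_postgres_db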
Because your Django site is being served by a Docker container running the Gunicorn WSGI server, running commands with 'manage.py' is a little more complex. Without deploy, you would have to run:
# under the hood
docker compose -f docker-compose.site.yml exec gunicorn python manage.py (command) [args...]
...which is gross. Instead, you can use 'manage.py' like so:
deploy manage [commands...]
In this environment, it would likely only be used to create the superuser, as that is tied to the database, and the production database is different from the one used for local testing.
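For example:
# create an admin account in the production database (interactive)
deploy manage createsuperuser
# migrations also target the production database when run this way
deploy manage migrate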
Note
Fun fact: The all-important deploy command is just another Python script. If you run "which deploy" in the terminal, you'll see it's located at '/usr/local/bin/deploy', which is a symlink (kinda like a Windows shortcut or a macOS alias) to '/django/source/deploy.py'.