Automated deployment #703
Conversation
Yeah, we set is_testing=True on beta, which can be useful at times. We can't set it in appconfig.ini, because one of the major speedups from is_testing=False comes from caching appconfig.ini rather than re-reading it on every request. In fact, this is one of the reasons for using is_testing=True on beta: so that we can make appconfig.ini changes and see their effect without restarting the server.
Ideally IMO beta would be "as-live" as possible, just with a different server name; setting is_testing=True works against that. But the above doesn't stop this; rather, it'd be the beta server that needs its code altered, not production, which feels like the right way round.
I was thinking of reloading the config if is_testing is set to get around this, but enabling/disabling would be a bit weird in retrospect (enabling would require a restart, disabling wouldn't). Another option would be hooking into …
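For context, a minimal sketch of the reload behaviour under discussion, assuming the stock web2py AppConfig helper (the flag wiring here is illustrative, not the actual OZtree code):

```python
# Sketch of a web2py model file (e.g. models/db.py) -- illustrative only.
# With reload=True, AppConfig re-reads appconfig.ini on every request, so
# config edits show up without a restart; with reload=False the parsed
# config is cached, which is the production speedup mentioned above.
from gluon.contrib.appconfig import AppConfig

is_testing = True  # hypothetical flag; OZtree derives this differently
myconf = AppConfig(reload=is_testing)
```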
Force-pushed from 6c6cf06 to b09ac41.
I don't know much about how these production branches are used, but why is this new one based on the old one that has a last commit of September 2022? Shouldn't it be rebased on the latest main?
Hrm, good point. We're in the process of migrating onezoom.org, so I want to deploy the current live site to the new host, not an updated version based on main. I'd assumed the production branch was what's actually live; turns out it's not. Assuming I'm looking in the right place, it seems to be a local commit off d70c6896. @hyanwong any reason not to update the production branch?
We need to handle images.onezoom.org, although currently live/beta don't use it; maybe they should? Any thoughts on how this would be set up, @hyanwong? This would be a good time to do it, if so.
The images.onezoom.org thing has been a useful redirect in the past. I've tried to keep stuff in the same filesystem. That means that when uploading new images by hand, it's easier for me to place them into one filesystem, shared by beta, prod, and the images.onezoom.org redirect.
I think updating it should be fine. The main site is currently on the "renewals-temporary-fix" branch, with 2 additional hacks: setting is_testing=False, and something to comment out sponsorship stories. The git log is below. I think we should just update production to reflect this.
Basically, I wanted all instances to share the same set of thumbnail image directories. These would not be set up using automatic deployment, as collecting all the images would be impossible during deployment (many have, in fact, vanished from the Internet now).
True, the … If we wanted to stop image-sharing, setting … There are potentially cross-origin problems to deal with, but if there were, we'd have noticed long ago when developing. Another nice feature of …
Merge renewals-temporary-fix branch, which has been the actual state of production for a while. References: #703
To run webpack on node 18 we need to set --openssl-legacy-provider until we can upgrade webpack.
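As a hedged illustration of one way this flag can be applied (the command, mode, and use of a Python wrapper are assumptions, not the actual Grunt wiring):

```python
# Illustrative wrapper only -- not the real build setup.  Node 17+ rejects
# webpack 4's md4 hashing unless the legacy OpenSSL provider is enabled,
# so export it via NODE_OPTIONS before invoking webpack.
import os
import subprocess

env = dict(os.environ, NODE_OPTIONS="--openssl-legacy-provider")
subprocess.run(["npx", "webpack", "--mode", "production"], env=env, check=True)
```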
Automate housekeeping for web2py in a Grunt task
We need an up-to-date pymysql; get it by wrapping a venv around web2py. Add a web2py-run helper to use when running standalone scripts, which will get the web2py setup right.
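As a rough sketch, such a helper could wrap web2py's standard script-running flags (-S/-M/-R/-A); the paths, app name, and structure below are assumptions, not the real helper:

```python
#!/usr/bin/env python3
# Hypothetical "web2py-run" sketch: run a standalone script with the app's
# models loaded, using the venv's interpreter.  All paths/names assumed.
import os
import sys

OZTREE_ROOT = "/var/www/OZtree"  # assumed install location
VENV_PYTHON = os.path.join(OZTREE_ROOT, ".venv", "bin", "python")

if len(sys.argv) < 2:
    sys.exit("usage: web2py-run <script.py> [args...]")

os.chdir(OZTREE_ROOT)
# web2py's own CLI: -S selects the app, -M imports its models,
# -R runs the given script, -A passes remaining arguments through.
cmd = [VENV_PYTHON, "web2py.py", "-S", "OZtree", "-M", "-R", sys.argv[1]]
if len(sys.argv) > 2:
    cmd += ["-A"] + sys.argv[2:]
os.execv(VENV_PYTHON, cmd)
```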
This will impact search performance, by forcing all search queries to go through the unique index before searching. Remove it; replace with Google Analytics later.
Instead of having to tweak is_testing for production use, check the request environment to see what server we're using.
Not that we should be using sessions much, if at all.
Scripts to configure nginx/supervisord
Update shebang at the top of web2py.py so it uses the venv by default.
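For illustration, the kind of change this implies (the venv path is an assumption):

```python
#!/var/www/OZtree/.venv/bin/python
# First line of web2py.py: pointing the shebang at the venv's interpreter
# means running ./web2py.py directly picks up the venv's packages (e.g. the
# newer pymysql) without activating the venv first.
```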
Force-pushed from b09ac41 to c7be157.
Remove dependencies that are now part of tree-build; rework instructions to take into account the new Grunt rules. Rework based on the …
@hyanwong I've had a bash at rewriting the README a bit to reflect current reality. Mind having a read through to see if it makes sense? It's probably easier to look at https://github.com/OneZoom/OZtree/tree/production-next than the above commit.
Misc comments, some quite minor.
I added various comments. But a higher-level comment is that I feel this is missing information about the 'fullest' type of installation, i.e. there are sort of 3 levels:

…

The readme covers 1. and 2.a., and it may make sense to mention 2.b. as well. WDYT? I should add that we should not block on this, as it can be expanded later, so we don't increase the scope of this PR.
I have no idea how to do that, so haven't documented it :) More seriously, ideally (2a) doesn't exist, because tree-build is easy enough that there's no reason to do anything else. Could we go as far as tree-build having a GitHub action generating artefacts that OZtree consumes?
Yep, I'll do the trivial comments, but I think the rest should wait until I've rebased this onto main and can follow a normal development pattern.
Ideally this would be separated out into instructions for a production instance, but at least not mentioning a temporary branch is good.
An empty installation doesn't have /var/db/acme/live/, only /var/db/acme/.
Nginx falls over if IPv6 isn't available, which it isn't here.
There is no /usr/local/etc/supervisord; it's all in /usr/local/etc.
It's comforting to see what it's doing, and makes error messages more intelligible.
Move the production installation notes into their own section, and mention that the install scripts are to be run as root.
Right, this is producing a working setup, so going to merge. The README still isn't ideal, but it needs merge conflicts resolving onto main anyway, and that's the version anyone will see.
There isn't a separate README_SERVER any more; the installation notes are more choose-your-own-adventure, and the gory details are mostly automatic now.
* Link to MSI installer
* Suggest SQL workbench
/cc @hyanwong. That's an interesting idea. One challenge is that tree-build requires some massive downloads that take hours (in particular a wikidata dump). That would make the build extremely heavy, unless the files are somehow cached on the build machine. Another angle is that some users may be using their own trees, in which case they'll inevitably need to build their own artifacts in tree-build and consume them in OZtree.
Upgrade python/npm, automate much more of the deployment
@hyanwong it'd be useful to see what you think about 045217c. I'm presuming that is_testing iff rocket is a good assumption, but it may not always be true under beta. It'd be easy to add a config override if that's the case.
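For reference, a minimal sketch of the kind of check being discussed; the actual code in 045217c may differ, and the SERVER_SOFTWARE string is an assumption about web2py's built-in Rocket server:

```python
# Sketch only -- not necessarily what 045217c does.  web2py exposes the WSGI
# environ as request.env; under the built-in Rocket development server,
# SERVER_SOFTWARE typically identifies itself as "Rocket", whereas under
# nginx/uwsgi in production it doesn't.
def guess_is_testing(request):
    server_software = request.env.server_software or ""
    return server_software.lower().startswith("rocket")
```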