Automated deployment #703

Merged
merged 20 commits into from
Feb 5, 2024
Conversation

lentinj
Collaborator

@lentinj lentinj commented Jan 19, 2024

Upgrade python/npm, automate much more of the deployment

@hyanwong it'd be useful to see what you think about 045217c. I'm presuming that "is_testing iff rocket" is a good assumption, but it may not always be true, e.g. under beta. It'd be easy to add a config override if that's the case.

@hyanwong
Member

hyanwong commented Jan 19, 2024

Yeah, we set is_testing=True on beta, which can be useful at times.

We can't set it in appconfig.ini, because one of the major speedups from is_testing=False is that it only reads appconfig.ini once, at the start of firing up web2py.

In fact, this is one of the reasons for using is_testing=True on beta: so that we can make appconfig.ini changes and see their effect without restarting the server.

@lentinj
Collaborator Author

lentinj commented Jan 20, 2024

> Yeah, we set is_testing=True on beta, which can be useful at times.

Ideally IMO beta would be "as-live" as possible, just with a different server name. Setting is_testing here doesn't feel ideal.

But the above doesn't stop this; rather, it'd be the beta server that needs its code altered, not production, which feels the right way round.

> We can't set it in appconfig.ini, because one of the major speedups from is_testing=False is that it only reads appconfig.ini once, at the start of firing up web2py.

I was thinking of reloading the config if is_testing is set to get around this, but enabling/disabling would be a bit weird in retrospect (enabling would require a restart, disabling wouldn't).
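The reload-when-testing idea could be sketched like this. This is a minimal illustration using Python's stdlib configparser, not web2py's actual code (web2py's AppConfig helper takes a similar reload argument); the function name, path, and caching scheme are all assumptions:

```python
import configparser

_cache = {}

def load_appconfig(path="private/appconfig.ini", is_testing=False):
    """Read an ini file, caching the result unless is_testing is set.

    Hypothetical sketch: with is_testing=True every call re-reads the file,
    so config edits take effect without a server restart; with
    is_testing=False the first parse is reused for the process lifetime.
    """
    if not is_testing and path in _cache:
        return _cache[path]
    parser = configparser.ConfigParser()
    parser.read(path)
    _cache[path] = parser
    return parser
```

Note the retrospective weirdness mentioned above: if is_testing itself lives in the cached config, turning it on only takes effect after one restart, while turning it off takes effect immediately.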

Another option would be hooking into grunt dev vs grunt prod somehow, but I suspect that's getting carried away.

@davidebbo
Collaborator

I don't know much about how these production branches are used, but why is this new one based on the old one, whose last commit is from September 2022? Shouldn't it be rebased on the latest main?

@lentinj
Collaborator Author

lentinj commented Jan 20, 2024

Hrm, good point.

We're in the process of migrating onezoom.org, so I want to deploy the current live site to the new host, not an updated version based on main. The theory is that the production branch is the state of the live server, which is why this is trying to merge to production.

Turns out it's not. Assuming I'm looking in the right place, it seems to be a local commit off d70c6896. @hyanwong any reason not to update the production branch?

@lentinj
Collaborator Author

lentinj commented Jan 22, 2024

We need to handle images.onezoom.org; although live/beta don't currently use it, maybe they should? Any thoughts on how this would be set up, @hyanwong? This would be a good time to do it if so.

@hyanwong
Member

hyanwong commented Feb 1, 2024

> We need to handle images.onezoom.org, although currently live/beta don't use it, maybe they should?

The images.onezoom.org thing has been a useful redirect in the past. I've tried to keep stuff in the same filesystem. That means images.onezoom.org can point to a specific directory, and then both Beta and Prod can symlink to that directory, so they are all using the same thumbnails. I guess this would be tricky if these are all in their own jail.

When uploading new images by hand, it's easier for me to place them into one filesystem, shared by Beta, Prod, and the images.onezoom.org redirect.
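The shared-filesystem layout described above could be sketched roughly as follows. All paths here are illustrative stand-ins, not the real server layout (the script uses a temporary directory in place of the web root so it can run anywhere):

```shell
#!/bin/sh
set -e
# Stand-in for the real web root, e.g. /var/www (assumption, for illustration)
BASE=$(mktemp -d)
# The single shared thumbnail directory, pointed at by images.onezoom.org
SHARED="$BASE/images-onezoom"
mkdir -p "$SHARED"

# Each instance's static image path becomes a symlink into the shared store,
# so Beta and Prod serve identical thumbnails
for inst in beta prod; do
  mkdir -p "$BASE/$inst/static/FinalOutputs"
  ln -sfn "$SHARED" "$BASE/$inst/static/FinalOutputs/img"
done
```

As noted, this only works while everything shares one filesystem; symlinks can't cross jail boundaries.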

> @hyanwong any reason not to update the production branch?

I think updating it should be fine. The main site is currently on the "renewals-temporary-fix" branch (with 2 additional hacks: the is_testing=False, and something to comment out sponsorship stories). The git log is below. I think we should just update production to reflect this.

```
$ git log
commit 47746f907551ff53b38f3c9063708754ffb0acd8 (HEAD -> renewals-temporary-fix)
Author: Yan Wong <[email protected]>
Date:   Sun Apr 23 18:22:42 2023 +0100

    Correct syntax for myconf.take

    And fix #662

commit 17c67ab71a7d5ac070e1e786c6d3d5b2273f8d9f
Author: Jamie Lentin <[email protected]>
Date:   Thu Mar 30 12:19:34 2023 +0100

    modules/sponsorship: Update test text post 84f7bb97

commit 548ecbce15f24fc2f89e148dd282efde91953857
Author: Yan Wong <[email protected]>
Date:   Sat Apr 1 19:13:38 2023 +0100

    Temp fix for #645

    Also add tests for `username` and lint the files

commit aae4d8d361a1c9b44b9ecbf0a12bf441337910da
Author: Yan Wong <[email protected]>
Date:   Sat Apr 1 14:44:16 2023 +0100

    Update sponsor_renew_reminder.txt

commit d70c68963c3a918719c15a075bd6478db1c63147
Author: James Rosindell <[email protected]>
Date:   Wed Mar 22 17:58:33 2023 +0000

    Update sponsor_renew.html

    Moves message about changes to sponsorship name and donor name to top of page and rephrases.
```

@hyanwong
Member

hyanwong commented Feb 1, 2024

Basically, I wanted all instances to share the same set of thumbnail image directories. These would not be set up using automatic deployment, as collecting all the images would be impossible during deployment (many have, in fact, vanished from the Internet now).

@lentinj
Collaborator Author

lentinj commented Feb 1, 2024

> > We need to handle images.onezoom.org, although currently live/beta don't use it, maybe they should?
>
> The images.onezoom.org thing has been a useful redirect in the past. I've tried to keep stuff in the same filesystem. That means images.onezoom.org can point to a specific directory, and then both Beta and Prod can symlink to that directory, so they are all using the same thumbnails. I guess this would be tricky if these are all in their own jail.

True, the static/FinalOutputs/img symlinks wouldn't work across jails, but in development (and in the example config) we set url_base = //images.onezoom.org/. If this were set on beta & prod, there'd be no reason for the symlinks; both sites would use the same image URLs from the get-go.

If we wanted to stop image-sharing, setting url_base = //imagesbeta.onezoom.org/ or disabling it entirely would do the trick.
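For illustration, the appconfig.ini fragment being discussed might look something like this; the section name and surrounding comments are assumptions modelled on the example-config pattern mentioned above, not a verbatim copy of the repo's file:

```ini
[images]
; Serve thumbnails from the shared images host rather than each
; instance's local static/FinalOutputs/img directory.
; Use //imagesbeta.onezoom.org/ instead to stop image-sharing.
url_base = //images.onezoom.org/
```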

There are potentially cross-origin problems to deal with, but if there were any, we'd have noticed long ago when developing.

Another nice feature of images.onezoom.org is that it allows us to have alternate hosting for the images in the future, throwing them through Cloudflare or summat.

lentinj added commits that referenced this pull request Feb 5, 2024

* Merge renewals-temporary-fix branch, which has been the actual state of production for a while. References: #703
* To run webpack on node 18 we need to set --openssl-legacy-provider until we can upgrade webpack.
* Automate housekeeping for web2py in a Grunt task
* We need an up-to-date pymysql; get it by wrapping a venv around web2py. Add a web2py-run helper to use when running standalone scripts, that will get the web2py setup right.
* This will impact search performance, by forcing all search queries to go through the unique index before searching.
* Remove, replace with google analytics later.
* Instead of having to tweak is_testing for production use, check the request environment to see what server we're using.
* Not that we should be using sessions much, if at all.
* Scripts to configure nginx/supervisord
* Update shebang at the top of web2py.py so it uses the venv by default.
* Remove dependencies that are now part of tree-build, rework instructions to take into account new Grunt rules.
* Rework based on the
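One of the commits above replaces the is_testing tweak with a check of the request environment. A hypothetical sketch of that kind of check; the function name, environment field, and hostnames are assumptions, not the PR's actual code:

```python
def is_testing_from_env(environ):
    """Treat any host other than the known production names as a test instance.

    Hypothetical sketch: the real commit may inspect different request
    environment fields or compare against different hostnames.
    """
    # Strip any :port suffix before comparing hostnames
    host = environ.get("HTTP_HOST", "").split(":")[0].lower()
    return host not in ("onezoom.org", "www.onezoom.org")
```

With this approach the same code runs everywhere, and only the server's own hostname decides whether testing behaviour is enabled.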
@lentinj
Collaborator Author

lentinj commented Feb 5, 2024

@hyanwong I've had a bash at rewriting the README a bit to reflect current reality. Mind having a read through and seeing if it makes sense? It's probably easier to look at https://github.com/OneZoom/OZtree/tree/production-next than the above commit.

Collaborator

@davidebbo davidebbo left a comment


Misc comments, some quite minor.

@davidebbo
Collaborator

davidebbo commented Feb 5, 2024

I added various comments. But a higher-level comment is that I feel this is missing information about the 'fullest' type of installation, i.e. there are sort of 3 levels:

  1. Partial, you only run the client files and you hit the live OZ server
  2. Run your own server:
    a. you fill in the DB by getting a dump from the docker image
    b. you build your own data, which is what the tree-build repo is all about

The readme covers 1. and 2.a., and it may make sense to mention 2.b. as well.

WDYT?

I should add that we should not block on this, as it can be expanded later, so we don't increase the scope of this PR.

@lentinj
Collaborator Author

lentinj commented Feb 5, 2024

> b. you build your own data, which is what the tree-build repo is all about

I have no idea how to do that, so haven't documented it :)

More seriously, ideally (2a) doesn't exist because tree-build is easy enough that there's no reason to do anything else. Could we go as far as tree-build having a GitHub action generating artefacts that OZtree consumes?
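A GitHub Actions workflow along those lines might look like this. This is purely hypothetical: neither repo is confirmed to have such a file, and the workflow name, build command, and output path are invented for illustration:

```yaml
# Hypothetical tree-build workflow sketch; the build entry point
# and output directory are assumptions, not real repo contents.
name: build-tree-artefacts
on:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make tree-data          # assumed build entry point
      - uses: actions/upload-artifact@v4
        with:
          name: tree-data
          path: output/
```

OZtree (or a deployment script) could then download the latest tree-data artefact rather than taking a dump from the docker image.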

> I should add that we should not block on this, as it can be expanded later, so we don't increase the scope of this PR.

Yep, I'll do the trivial comments, but I think the rest should wait until I've rebased this onto main and can follow a normal development pattern.

* Ideally this would be separated out into instructions for a production instance, but at least not mentioning a temporary branch is good.
* An empty installation doesn't have /var/db/acme/live/, only /var/db/acme/.
* Nginx falls over if v6 isn't available, which it isn't.
* There is no /usr/local/etc/supervisord, it's all in /usr/local/etc.
* It's comforting to see what it's doing, and makes error messages more intelligible.
* Move the production installation notes into their own section, mention the install scripts to be run as root.
@lentinj lentinj marked this pull request as ready for review February 5, 2024 16:02
@lentinj
Collaborator Author

lentinj commented Feb 5, 2024

Right, this is producing a working setup, so I'm going to merge into production and cherry-pick onto main.

The README still isn't ideal, but it needs merge conflicts resolving onto main anyway, and that's the version anyone will see.

@lentinj lentinj merged commit 3da35ee into production Feb 5, 2024
lentinj added a commit that referenced this pull request Feb 5, 2024
There isn't a separate README_SERVER any more; the installation notes are more choose-your-own-adventure and the gory details are mostly automatic now.
lentinj added a commit that referenced this pull request Feb 5, 2024
* Link to MSI installer
* Suggest SQL workbench
@davidebbo
Collaborator

> More seriously, ideally (2a) doesn't exist because tree-build is easy enough there's no reason to do anything else. Could we go as far as tree-build having a Github action generating artefacts that OZtree consumes?

/cc @hyanwong. That's an interesting idea. One challenge is that tree-build requires some massive downloads that take hours (in particular a wikidata dump). That would make the build extremely heavy, unless the files are somehow cached on the build machine. Another angle is that some users may be using their own trees, in which case they'll inevitably need to build their own artifacts in tree-build and consume them in OZtree.
