
Unable to bootstrap cluster from PVC/Volume #4073

Open
joryirving opened this issue Jan 17, 2025 · 0 comments
Comments

joryirving commented Jan 17, 2025

Overview

Hi team,

I'm trying to set up disaster recovery (DR) for my Postgres cluster. I currently back up to 3 repos:

  1. NFS via PVC
  2. R2
  3. Minio

I've repeatedly and successfully restored Postgres clusters onto a new Kubernetes cluster via MinIO. However, trying to do the same via PVC/Volume fails, because the pod/PVC postgres-repo-host-0 isn't created until after the cluster is up and running. If I remove and recreate the cluster it works fine, but from a "blank" install of the operator it fails.

Environment

Kubernetes 1.32.1
Bare metal install (Talos Linux)
PGO ubi8-5.7.2-0
Postgres ubi-16.6-1 (16)
Storage local-hostpath (openebs)

Steps to Reproduce

  1. Install the PGO operator from scratch.
  2. Create a postgres cluster using dataSource.pgbackrest.repo.volume for the first time.
  3. The restore pod fails to find data to restore from.

It appears to be a race condition where the PVC/Volume isn't created until after the cluster is successfully running.

EXPECTED

I'm able to successfully bootstrap a new cluster from a backup on an NFS system.

ACTUAL

The cluster hangs and is unable to bootstrap.

Logs

N/A, as I worked around it by bootstrapping from S3 to reduce downtime.

Additional Information

This is the PostgresCluster manifest that tried (and failed) to restore from the PVC:
https://github.com/joryirving/home-ops/blob/9614dc3d6bab8a53ddf7344890765e4f057c7827/kubernetes/main/apps/database/crunchy-postgres/cluster/cluster.yaml

Specifically here:

      repos:
        - name: repo1
          volume: &nfs
            volumeClaimSpec:
              storageClassName: nfs-slow #csi-driver-nfs
              volumeName: postgres-nfs
              accessModes: ["ReadWriteMany"]
              resources:
                requests:
                  storage: 1Mi
          schedules:
            full: "30 1 * * 0" # Sunday at 01:30
            differential: "30 1 * * 1-6" # Mon-Sat at 01:30
            incremental: "30 3-23 * * *" # Every hour except 01:30-2:30
        - name: repo2
          s3: &r2
            bucket: crunchy-pgo
            endpoint: ${R2_ENDPOINT}
            region: us-east-1 #https://developers.cloudflare.com/r2/api/s3/api/#bucket-region
          schedules:
            full: "30 2 * * 0" # Sunday at 02:30
            incremental: "30 2 * * 1-6/2" # Mon-Sat at 02:30, every 2nd day
        # - name: repo3
        #   s3: &minio
        #     bucket: postgresql
        #     endpoint: s3.jory.dev
        #     region: ca-west-1
        #   schedules:
        #     full: "15 1 * * 0" # Sunday at 01:15
        #     differential: "15 1 * * 1-6" # Mon-Sat at 01:15
        #     incremental: "15 3-23 * * *" # Every hour except 01:30-2:30
  dataSource:
    pgbackrest:
      stanza: db
      configuration: *backupConfig
      global: *backupFlag
      repo:
        name: repo1
        volume: *nfs
        # s3: *r2

I'm manually creating the PV for the PVC to bind to here:
https://github.com/joryirving/home-ops/blob/9614dc3d6bab8a53ddf7344890765e4f057c7827/kubernetes/main/apps/database/crunchy-postgres/cluster/nfs-pvc.yaml
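For context, that manually created PV is a static NFS volume along these lines (the NFS server address and export path below are placeholders, not my actual values; the real manifest is in the linked file):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-nfs              # must match volumeName in the repo1 volumeClaimSpec
spec:
  storageClassName: nfs-slow      # must match storageClassName in the volumeClaimSpec
  capacity:
    storage: 1Mi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.internal  # placeholder; actual server is environment-specific
    path: /volumes/postgres       # placeholder export path
```

Because the PV is pre-created and Retain is set, the backup data survives even when the PVC is deleted along with the cluster; the problem is only that PGO never creates the repo-host PVC to bind to it during the initial bootstrap.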
