This is a community-maintained Helm chart for deploying Saleor, a modular, high-performance, headless e-commerce platform built with Python, GraphQL, Django, and React. This chart is not officially associated with or maintained by the Saleor team.
- Full Saleor stack deployment (API, Dashboard, Worker)
- Production-grade PostgreSQL configuration with read replica support
- Redis for caching and Celery tasks
- Service mesh integration for improved performance
- Horizontal Pod Autoscaling
- Configurable resource management
- Comprehensive security settings
- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure
- Istio (optional, for service mesh features)
- Add the Helm repository:
helm repo add trieb-work https://trieb-work.github.io/helm-charts
helm repo update
- Install the chart:
helm install my-saleor trieb-work/saleor
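In practice you will usually install with your own values file and a dedicated namespace. A minimal sketch (the my-values.yaml file and the saleor namespace are placeholders for your own choices):

# Install with custom values into a dedicated namespace
helm install my-saleor trieb-work/saleor \
  --namespace saleor \
  --create-namespace \
  -f my-values.yaml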
The chart deploys the following components:
- Saleor API server
- Saleor Dashboard
- Celery workers
- PostgreSQL database (optional)
- Redis (optional)
- Service mesh configuration (optional)
global:
# -- Global image pull secrets
imagePullSecrets: []
# -- Storage class to use for persistent volumes
storageClass: ""
# -- External Database URL (if not using internal PostgreSQL)
databaseUrl: ""
# -- External Redis URL (if not using internal Redis)
redisUrl: ""
# -- RSA private key for JWT signing
jwtRsaPrivateKey: ""
database:
primaryUrl: "" # External primary database URL
replicaUrls: [] # External read replica URLs
maxConnections: 150
connectionTimeout: 5
Saleor uses RSA private/public key pairs for JWT token signing. You must configure this for production deployments. Here's how to set it up:
- Generate a new RSA key pair (if you don't have one):

  # Generate private key
  openssl genrsa -out private.pem 4096
  # Generate public key
  openssl rsa -in private.pem -pubout -out public.pem

- Add the private key to your values file:

  global:
    jwtRsaPrivateKey: |
      -----BEGIN PRIVATE KEY-----
      Your RSA private key here
      -----END PRIVATE KEY-----
The private key will be automatically mounted in both the API and worker services. Keep your private key secure and never commit it to version control.
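To keep the key out of your values file entirely, you can pass it at install or upgrade time with Helm's --set-file flag. A minimal sketch, assuming the private.pem generated above and the global.jwtRsaPrivateKey value shown in the chart values:

# Inject the key from a local file instead of storing it in values.yaml
helm upgrade --install my-saleor trieb-work/saleor \
  -f my-values.yaml \
  --set-file global.jwtRsaPrivateKey=private.pem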
Note: If jwtRsaPrivateKey is not set, Saleor will use a temporary key in development mode, but this is not suitable for production use.
postgresql:
enabled: true
architecture: replication # standalone or replication
auth:
username: saleor
database: saleor
existingSecret: ""
primary:
persistence:
size: 50Gi
resources:
requests:
cpu: 2
memory: 4Gi
readReplicas:
replicaCount: 1
persistence:
size: 50Gi
postgresql:
enabled: false
global:
database:
primaryUrl: "postgresql://user:pass@primary-db:5432/saleor"
replicaUrls:
- "postgresql://user:pass@replica1-db:5432/saleor"
The chart offers several options for configuring Redis:
By default, the chart will deploy a Redis instance using the Bitnami Redis chart:
redis:
enabled: true
architecture: standalone
auth:
enabled: true
# Optional: Provide a specific password
password: "your-password" # If not set, a random password will be generated
master:
persistence:
size: 8Gi
resources:
requests:
cpu: 100m
memory: 128Mi
The Redis password is stored in a Kubernetes secret and will be preserved across Helm upgrades.
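If you need to retrieve the generated password later, it can be read back from that secret. The secret name below assumes the Bitnami convention of <release>-redis and may differ in your installation:

# Decode the generated Redis password (secret name is an assumption;
# check `kubectl get secrets` for the exact name in your release)
kubectl get secret my-saleor-redis \
  -o jsonpath="{.data.redis-password}" | base64 -d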
To use an external Redis instance, disable the built-in Redis and configure the external connection:
redis:
enabled: false
external:
host: "my-redis.example.com"
port: 6379
database: 0
username: "redis-user" # Optional, for Redis ACLs
password: "redis-password"
tls:
enabled: false # Set to true for TLS/SSL connections
For complete control over the Redis URL, you can provide it directly:
global:
redisUrl: "redis://user:[email protected]:6379/0"
# Or with TLS:
# redisUrl: "rediss://user:[email protected]:6379/0"
- If using built-in Redis without a specified password, a random one will be generated during first installation
- The generated password will be preserved across Helm upgrades
- Redis URL format: redis[s]://[username][:password]@host:port/database
  - Use redis:// for standard connections
  - Use rediss:// for TLS/SSL connections
  - Username is optional and only needed for Redis ACLs
  - Database number is optional (defaults to 0)
- When using external Redis with TLS:
  - Set redis.external.tls.enabled: true
  - The connection will use the rediss:// protocol
  - You can optionally skip TLS verification with redis.external.tls.insecureSkipVerify: true
The chart offers several options for configuring the database:
By default, the chart will deploy a PostgreSQL instance and automatically manage the credentials:
postgresql:
enabled: true
architecture: standalone # or 'replication' for primary-replica setup
auth:
username: saleor
database: saleor
# Optional: Provide a specific password
password: "your-password" # If not set, a random password will be generated
The PostgreSQL credentials are stored in a Kubernetes secret named postgresql-credentials. This secret is marked with helm.sh/resource-policy: keep to ensure the credentials persist across Helm upgrades.
If you want to manage the database credentials yourself, you can create a secret named postgresql-credentials before installing the chart:
kubectl create secret generic postgresql-credentials \
--from-literal=user=saleor \
--from-literal=database=saleor \
--from-literal=password=your-password
Then configure the chart to use the built-in PostgreSQL without specifying a password:
postgresql:
enabled: true
auth:
username: saleor
database: saleor
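Before installing, you can verify that the secret you created contains the expected password:

# Decode the password key from the manually created secret
kubectl get secret postgresql-credentials \
  -o jsonpath="{.data.password}" | base64 -d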
To use an external database, disable the built-in PostgreSQL and provide your database URL:
postgresql:
enabled: false
global:
database:
primaryUrl: "postgresql://user:password@your-db-host:5432/saleor"
# Optional: Add read replicas
replicaUrls:
- "postgresql://user:password@your-read-replica:5432/saleor"
- If you're using the built-in PostgreSQL and don't provide a password, a random one will be generated during the first installation
- The generated password will be preserved across Helm upgrades thanks to the helm.sh/resource-policy: keep annotation
- If you need to rotate the password (see the sketch below):
  - Delete the existing postgresql-credentials secret
  - Either let the chart generate a new password or provide a new one in your values
  - Upgrade the release
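A minimal sketch of the rotation steps, assuming the my-saleor release name used elsewhere in this README:

# 1. Delete the existing credentials secret
kubectl delete secret postgresql-credentials
# 2. Let the chart generate a new password (or set one in your values),
#    then upgrade the release
helm upgrade my-saleor trieb-work/saleor --values values.yaml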
api:
enabled: true
replicaCount: 1
image:
repository: ghcr.io/saleor/saleor
tag: "3.19.0"
resources:
requests:
cpu: 500m
memory: 1Gi
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 5
dashboard:
enabled: true
replicaCount: 1
image:
repository: ghcr.io/saleor/saleor-dashboard
tag: "3.19.0"
resources:
requests:
cpu: 100m
memory: 128Mi
worker:
enabled: true
replicaCount: 1
resources:
requests:
cpu: 200m
memory: 512Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 3
serviceMesh:
enabled: false
istio:
enabled: false
api:
circuitBreaker:
enabled: true
maxConnections: 100
timeout:
enabled: true
http: 10s
The chart supports both S3-compatible storage and Google Cloud Storage (GCS) for storing media and static files.
Enable S3 storage by configuring the following in your values file:
storage:
s3:
enabled: true
credentials:
accessKeyId: "your-access-key"
secretAccessKey: "your-secret-key"
config:
region: "us-east-1"
bucketName: "your-bucket-name"
# Optional configurations
staticBucketName: "your-static-bucket" # Separate bucket for static files
mediaBucketName: "your-media-bucket" # Separate bucket for media files
mediaPrivateBucketName: "private-bucket" # Separate bucket for private media
customDomain: "cdn.yourdomain.com" # Custom domain for serving files
defaultAcl: "public-read"
queryStringAuth: false
For more details on S3 configuration, see the official Saleor documentation.
To use Google Cloud Storage, configure the following:
storage:
gcs:
enabled: true
# When running on GKE with Workload Identity (recommended)
serviceAccount:
create: true
annotations:
iam.gke.io/gcp-service-account: saleor-gcs@YOUR_PROJECT_ID.iam.gserviceaccount.com
config:
bucketName: "your-bucket-name"
# Optional configurations
staticBucketName: "your-static-bucket"
mediaBucketName: "your-media-bucket"
mediaPrivateBucketName: "private-bucket"
customDomain: "cdn.yourdomain.com"
defaultAcl: "publicRead"
For more details on GCS configuration, see the official Saleor documentation.
Here's a complete example of S3 configuration using CloudFront for content delivery:
storage:
s3:
enabled: true
credentials:
accessKeyId: "AKIAIOSFODNN7EXAMPLE"
secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
config:
region: "us-east-1"
# Using separate buckets for different types of content
staticBucketName: "my-shop-static" # Public bucket for static files
mediaBucketName: "my-shop-media" # Public bucket for uploaded media
mediaPrivateBucketName: "my-shop-priv" # Private bucket for sensitive data
# Using CloudFront distributions for content delivery
customDomain: "static.myshop.com" # CloudFront domain for static files
mediaCustomDomain: "media.myshop.com" # CloudFront domain for media files
# Access control
defaultAcl: "public-read" # Make files publicly readable
queryStringAuth: false # Disable signed URLs
queryStringExpire: 3600 # 1 hour expiration for signed URLs (if enabled)
Here's an example using MinIO or other S3-compatible storage:
storage:
s3:
enabled: true
credentials:
accessKeyId: "minio-access-key"
secretAccessKey: "minio-secret-key"
config:
region: "us-east-1" # Required but might not be used
# Using a single bucket with different prefixes
staticBucketName: "saleor"
mediaBucketName: "saleor"
mediaPrivateBucketName: "saleor-private"
# Using custom domains
customDomain: "storage.example.com" # Domain for static files
mediaCustomDomain: "storage.example.com" # Domain for media files
# Access control
defaultAcl: "public-read"
queryStringAuth: false
queryStringExpire: 3600
Note: When using CloudFront or another CDN:
- Configure CORS appropriately for your domains
- Set up proper cache behaviors for static vs media content
- For private media, ensure the bucket is not publicly accessible
Django requires database migrations to be run after version upgrades. This chart provides two ways to handle migrations:
By default, the chart will automatically run migrations after installation and upgrades using a Kubernetes Job:
migrations:
enabled: true # Enable automatic migrations
# Additional environment variables specific to migrations
extraEnv: [] # Add migration-specific env vars if needed
resources: # Configure resources for the migration job
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 200m
memory: 512Mi
The migration job:
- Runs after install/upgrade using Helm hooks
- Uses the same image and environment variables as the API
- Inherits all environment variables from the API configuration
- Has access to all necessary secrets (database, Redis, JWT, etc.)
- Will be automatically cleaned up after successful completion
- Can be monitored using standard kubectl commands shown in the post-install notes
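For example, the job can be watched directly with kubectl; the exact job name is chart-defined, so take it from the kubectl get jobs output or the post-install notes:

# List jobs created by the release and follow the migration job's logs
kubectl get jobs
kubectl logs -f job/<migration-job-name>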
If you prefer to run migrations manually (e.g., in highly controlled environments), disable automatic migrations:
migrations:
enabled: false
Then run migrations manually when needed:
# Get the API pod name
POD=$(kubectl get pod -l app.kubernetes.io/component=api -o jsonpath="{.items[0].metadata.name}")
# Run migrations
kubectl exec -it $POD -- python manage.py migrate
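If you want to review what would be applied first, Django's showmigrations command lists the migration plan from the same pod:

# Show the ordered migration plan (applied migrations are marked with [X])
kubectl exec -it $POD -- python manage.py showmigrations --plan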
The migration job is designed with safety in mind:
- It runs with --no-input to prevent hanging on user input
- It uses the same image and environment as the API to ensure consistency
- It has access to all necessary configuration and secrets
- It's executed after the database is ready but before the new API version starts
- The job history is preserved for debugging purposes
After deploying Saleor for the first time, you'll need to create a superuser (admin) account to access the dashboard:
# Get the name of the API pod
POD=$(kubectl get pod -l app.kubernetes.io/component=api -o jsonpath="{.items[0].metadata.name}")
# Create a superuser
kubectl exec -it $POD -- python manage.py createsuperuser
Follow the prompts to create your admin account. You'll need to provide:
- Email address
- Password (minimum 8 characters)
Once created, you can use these credentials to log into the Saleor Dashboard.
Django requires specific configuration for allowed hosts and CORS. Configure these in your values file:
api:
extraEnv:
# For production, specify exact hostnames:
- name: ALLOWED_HOSTS
value: "your-domain.com,api.your-domain.com"
- name: ALLOWED_CLIENT_HOSTS
value: "dashboard.your-domain.com"
# For development only (not recommended for production):
- name: ALLOWED_HOSTS
value: "*"
- name: ALLOWED_CLIENT_HOSTS
value: "*"
Important Notes:
- Django's ALLOWED_HOSTS does not support wildcards like *.domain.com
- You must specify exact hostnames that will be used to access your Saleor instance
- For production, always specify exact domains rather than using "*"
- The ALLOWED_CLIENT_HOSTS should match the domains from which your dashboard will access the API
The chart supports automatic TLS certificate management using cert-manager. By default, it's configured to use Let's Encrypt production certificates:
ingress:
api:
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
tls:
- secretName: saleor-api-tls
hosts:
- your-api-domain.com
dashboard:
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
tls:
- secretName: saleor-dashboard-tls
hosts:
- your-dashboard-domain.com
Important Notes:
- When TLS is enabled (by configuring ingress.api.tls), the dashboard will automatically use HTTPS to connect to the API
- The dashboard's API_URL environment variable will be set to https:// or http:// based on TLS configuration
- Both API and Dashboard should use TLS in production for security
Prerequisites:
- cert-manager must be installed in your cluster
- A cluster issuer named "letsencrypt-prod" must be configured
To use a different certificate issuer:
- Change the cert-manager.io/cluster-issuer annotation value
- Or remove it to use the cluster default
- Or add cert-manager.io/issuer instead to use a namespaced Issuer
The TLS certificates will be stored in the specified secrets (saleor-api-tls and saleor-dashboard-tls by default).
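Once the ingress resources are applied, issuance status can be checked through cert-manager's Certificate resources; when certificates are created via ingress annotations they are typically named after the TLS secrets configured above, so adjust the name if yours differ:

# Check certificate issuance status
kubectl get certificate
kubectl describe certificate saleor-api-tls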
- Configure appropriate resource requests and limits:
api:
resources:
requests:
cpu: 1
memory: 2Gi
limits:
cpu: 2
memory: 4Gi
- Enable autoscaling:
api:
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
- Configure read replicas for improved performance:
postgresql:
architecture: replication
readReplicas:
replicaCount: 2
- Optimize PostgreSQL settings:
postgresql:
primary:
extendedConfiguration: |
work_mem = 64MB
maintenance_work_mem = 256MB
shared_buffers = 3000MB
max_connections = 150
- Enable service account:
serviceAccount:
create: true
annotations:
iam.gke.io/gcp-service-account: saleor-gcs@YOUR_PROJECT_ID.iam.gserviceaccount.com
- Configure security context:
podSecurityContext:
fsGroup: 1000
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
The chart supports monitoring through various mechanisms:
- Kubernetes probes
- PostgreSQL metrics (when enabled)
- Service mesh telemetry (when enabled)
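If you want Prometheus metrics for the bundled database, the Bitnami PostgreSQL subchart normally exposes metrics values along these lines (a sketch; verify the exact keys against the subchart version shipped with this chart):

postgresql:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true  # requires the Prometheus Operator CRDs in the cluster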
Major changes:
- Restructured values.yaml
- Added Dashboard component
- Added service mesh support
- Updated PostgreSQL configuration
- Added read replica support
To upgrade:
- Backup your values.yaml
- Review the new values.yaml structure
- Migrate your configurations
- Test in a staging environment
- Upgrade production:
helm upgrade my-saleor trieb-work/saleor --values values.yaml
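To back up the values currently applied to the release (the first step above), helm get values can be used:

# Export the user-supplied values of the running release
helm get values my-saleor > my-saleor-values-backup.yaml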
- Database connection issues:
  - Verify database credentials
  - Check network policies
  - Validate connection strings
- Resource constraints:
  - Monitor resource usage
  - Adjust requests/limits
  - Enable autoscaling
- Performance issues:
  - Enable read replicas
  - Configure service mesh
  - Optimize PostgreSQL settings
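A few generic kubectl commands help with the checks above; the component label matches the selector used elsewhere in this README:

# Inspect API pods, recent logs, and cluster events
kubectl get pods -l app.kubernetes.io/component=api
kubectl logs -l app.kubernetes.io/component=api --tail=100
kubectl get events --sort-by=.lastTimestamp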
For issues and feature requests, please:
- Check the documentation
- Open an issue in the GitHub repository
- Join the Saleor community