
Self Hosted API_JWT_JWKS fails if the provided JSON is an array #1269

Open
Towerful opened this issue Jan 17, 2025 · 3 comments
Labels
bug Something isn't working

Comments


Towerful commented Jan 17, 2025

Bug report

  • [Y] I confirm this is a bug with Supabase, not with my own application.
  • [Y] I confirm I have searched the Docs, GitHub Discussions, and Discord.

Describe the bug

supabase/auth expects GOTRUE_JWT_KEYS to be a JSON array of key objects.
supabase/realtime expects API_JWT_JWKS to be a single key object.
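
For illustration, the two shapes look roughly like this, based on the behaviour described above (key material elided):

What supabase/auth accepts for GOTRUE_JWT_KEYS (an array of JWK objects):

[{"use":"sig","kty":"RSA","kid":"...","alg":"RS256","n":"...","e":"AQAB"}]

What supabase/realtime accepts for API_JWT_JWKS (a single JWK object):

{"use":"sig","kty":"RSA","kid":"...","alg":"RS256","n":"...","e":"AQAB"}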

Passing an array of keys to supabase/realtime's API_JWT_JWKS causes the self-hosted migrations to fail:

/app/lib/realtime-2.33.70/priv/repo/seeds.exs:40:

Errors
realtime-dev.supabase-realtime  | 
realtime-dev.supabase-realtime  |     %{jwt_jwks: [{"is invalid", [type: :map, validation: :cast]}]}

To Reproduce

I started with the default supabase/supabase/docker deployment.
I then generated a JWKS and signed an anon & service_role key.
I then swapped everything over from using a symmetric JWT secret (set to "garbage" wherever a value is still required, as I'm still working through all of this) to using a JWKS.
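
In short, the JWT-related environment variables were changed as follows (the full docker compose and .env files are below):

# auth
GOTRUE_JWT_KEYS: ${JWT_KEYS}
GOTRUE_JWT_SECRET: "garbage"
# rest
PGRST_JWT_SECRET: ${JWT_KEYS}
PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_KEYS}
# realtime
API_JWT_JWKS: ${JWT_KEYS}
API_JWT_SECRET: "garbage"
# storage
JWT_JWKS: ${JWT_KEYS}
PGRST_JWT_SECRET: "garbage"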

docker compose:

# Usage
#   Start:              docker compose up
#   With helpers:       docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml up
#   Stop:               docker compose down
#   Destroy:            docker compose -f docker-compose.yml -f ./dev/docker-compose.dev.yml down -v --remove-orphans
#   Reset everything:  ./reset.sh

name: supabase

services:
  studio:
    container_name: supabase-studio
    image: supabase/studio:20250113-83c9420
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "node",
          "-e",
          "fetch('http://studio:3000/api/profile').then((r) => {if (r.status !== 200) throw new Error(r.status)})"
        ]
      timeout: 10s
      interval: 5s
      retries: 3
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

      DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
      DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}
      OPENAI_API_KEY: ${OPENAI_API_KEY:-}

      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      AUTH_JWT_SECRET: ${JWT_SECRET}

      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_URL: http://analytics:4000
      NEXT_PUBLIC_ENABLE_LOGS: true
      # Comment to use Big Query backend for analytics
      NEXT_ANALYTICS_BACKEND_PROVIDER: postgres
      # Uncomment to use Big Query backend for analytics
      # NEXT_ANALYTICS_BACKEND_PROVIDER: bigquery

  kong:
    container_name: supabase-kong
    image: kong:2.8.1
    restart: unless-stopped
    # https://unix.stackexchange.com/a/294837
    entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'
    ports:
      - ${KONG_HTTP_PORT}:8000/tcp
      - ${KONG_HTTPS_PORT}:8443/tcp
    depends_on:
      analytics:
        condition: service_healthy
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
      # https://github.com/supabase/cli/issues/14
      KONG_DNS_ORDER: LAST,A,CNAME
      KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
      DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
    volumes:
      # https://github.com/supabase/supabase/issues/12661
      - ./volumes/api/kong.yml:/home/kong/temp.yml:ro

  auth:
    container_name: supabase-auth
    image: supabase/gotrue:v2.167.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:9999/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: 9999
      API_EXTERNAL_URL: ${API_EXTERNAL_URL}

      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}

      GOTRUE_SITE_URL: ${SITE_URL}
      GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}
      GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}

      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_EXP: ${JWT_EXPIRY}
      GOTRUE_JWT_KEYS: ${JWT_KEYS}
      GOTRUE_JWT_SECRET: "garbage"

      GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
      GOTRUE_EXTERNAL_ANONYMOUS_USERS_ENABLED: ${ENABLE_ANONYMOUS_USERS}
      GOTRUE_MAILER_AUTOCONFIRM: ${ENABLE_EMAIL_AUTOCONFIRM}

      # Uncomment to bypass nonce check in ID Token flow. Commonly set to true when using Google Sign In on mobile.
      # GOTRUE_EXTERNAL_SKIP_NONCE_CHECK: true

      # GOTRUE_MAILER_SECURE_EMAIL_CHANGE_ENABLED: true
      # GOTRUE_SMTP_MAX_FREQUENCY: 1s
      GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
      GOTRUE_SMTP_HOST: ${SMTP_HOST}
      GOTRUE_SMTP_PORT: ${SMTP_PORT}
      GOTRUE_SMTP_USER: ${SMTP_USER}
      GOTRUE_SMTP_PASS: ${SMTP_PASS}
      GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}
      GOTRUE_MAILER_URLPATHS_INVITE: ${MAILER_URLPATHS_INVITE}
      GOTRUE_MAILER_URLPATHS_CONFIRMATION: ${MAILER_URLPATHS_CONFIRMATION}
      GOTRUE_MAILER_URLPATHS_RECOVERY: ${MAILER_URLPATHS_RECOVERY}
      GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: ${MAILER_URLPATHS_EMAIL_CHANGE}

      GOTRUE_EXTERNAL_PHONE_ENABLED: ${ENABLE_PHONE_SIGNUP}
      GOTRUE_SMS_AUTOCONFIRM: ${ENABLE_PHONE_AUTOCONFIRM}
      # Uncomment to enable custom access token hook. Please see: https://supabase.com/docs/guides/auth/auth-hooks for full list of hooks and additional details about custom_access_token_hook

      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_ENABLED: "true"
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_URI: "pg-functions://postgres/public/custom_access_token_hook"
      # GOTRUE_HOOK_CUSTOM_ACCESS_TOKEN_SECRETS: "<standard-base64-secret>"

      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_ENABLED: "true"
      # GOTRUE_HOOK_MFA_VERIFICATION_ATTEMPT_URI: "pg-functions://postgres/public/mfa_verification_attempt"

      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_ENABLED: "true"
      # GOTRUE_HOOK_PASSWORD_VERIFICATION_ATTEMPT_URI: "pg-functions://postgres/public/password_verification_attempt"

      # GOTRUE_HOOK_SEND_SMS_ENABLED: "false"
      # GOTRUE_HOOK_SEND_SMS_URI: "pg-functions://postgres/public/custom_access_token_hook"
      # GOTRUE_HOOK_SEND_SMS_SECRETS: "v1,whsec_VGhpcyBpcyBhbiBleGFtcGxlIG9mIGEgc2hvcnRlciBCYXNlNjQgc3RyaW5n"

      # GOTRUE_HOOK_SEND_EMAIL_ENABLED: "false"
      # GOTRUE_HOOK_SEND_EMAIL_URI: "http://host.docker.internal:54321/functions/v1/email_sender"
      # GOTRUE_HOOK_SEND_EMAIL_SECRETS: "v1,whsec_VGhpcyBpcyBhbiBleGFtcGxlIG9mIGEgc2hvcnRlciBCYXNlNjQgc3RyaW5n"

  rest:
    container_name: supabase-rest
    image: postgrest/postgrest:v12.2.0
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_KEYS}
      PGRST_DB_USE_LEGACY_GUCS: "false"
      PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_KEYS}
      PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}
    command: "postgrest"

  realtime:
    # This container name looks inconsistent but is correct because realtime constructs tenant id by parsing the subdomain
    container_name: realtime-dev.supabase-realtime
    image: supabase/realtime:v2.33.70
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-sSfL",
          "--head",
          "-o",
          "/dev/null",
          "-H",
          "Authorization: Bearer ${ANON_KEY}",
          "http://localhost:4000/api/tenants/realtime-dev/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      PORT: 4000
      DB_HOST: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_USER: supabase_admin
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
      DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
      DB_ENC_KEY: supabaserealtime
      API_JWT_JWKS: ${JWT_KEYS}
      API_JWT_SECRET: "garbage"
      SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      ERL_AFLAGS: -proto_dist inet_tcp
      DNS_NODES: "''"
      RLIMIT_NOFILE: "10000"
      APP_NAME: realtime
      SEED_SELF_HOST: true
      RUN_JANITOR: true

  # To use S3 backed storage: docker compose -f docker-compose.yml -f docker-compose.s3.yml up
  storage:
    container_name: supabase-storage
    image: supabase/storage-api:v1.14.5
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      rest:
        condition: service_started
      imgproxy:
        condition: service_started
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://storage:5000/status"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      ANON_KEY: ${ANON_KEY}
      SERVICE_KEY: ${SERVICE_ROLE_KEY}
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: "garbage"
      JWT_JWKS: ${JWT_KEYS}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      FILE_SIZE_LIMIT: 52428800
      STORAGE_BACKEND: file
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
      TENANT_ID: stub
      # TODO: https://github.com/supabase/storage-api/issues/55
      REGION: stub
      GLOBAL_S3_BUCKET: stub
      ENABLE_IMAGE_TRANSFORMATION: "true"
      IMGPROXY_URL: http://imgproxy:5001
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  imgproxy:
    container_name: supabase-imgproxy
    image: darthsim/imgproxy:v3.8.0
    healthcheck:
      test: [ "CMD", "imgproxy", "health" ]
      timeout: 5s
      interval: 5s
      retries: 3
    environment:
      IMGPROXY_BIND: ":5001"
      IMGPROXY_LOCAL_FILESYSTEM_ROOT: /
      IMGPROXY_USE_ETAG: "true"
      IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}
    volumes:
      - ./volumes/storage:/var/lib/storage:z

  meta:
    container_name: supabase-meta
    image: supabase/postgres-meta:v0.84.2
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      analytics:
        condition: service_healthy
    restart: unless-stopped
    environment:
      PG_META_PORT: 8080
      PG_META_DB_HOST: ${POSTGRES_HOST}
      PG_META_DB_PORT: ${POSTGRES_PORT}
      PG_META_DB_NAME: ${POSTGRES_DB}
      PG_META_DB_USER: supabase_admin
      PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}

  # functions:
  #   container_name: supabase-edge-functions
  #   image: supabase/edge-runtime:v1.66.4
  #   restart: unless-stopped
  #   depends_on:
  #     analytics:
  #       condition: service_healthy
  #   environment:
  #     JWT_SECRET: ${JWT_SECRET}
  #     SUPABASE_URL: http://kong:8000
  #     SUPABASE_ANON_KEY: ${ANON_KEY}
  #     SUPABASE_SERVICE_ROLE_KEY: ${SERVICE_ROLE_KEY}
  #     SUPABASE_DB_URL: postgresql://postgres:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
  #     # TODO: Allow configuring VERIFY_JWT per function. This PR might help: https://github.com/supabase/cli/pull/786
  #     VERIFY_JWT: "${FUNCTIONS_VERIFY_JWT}"
  #   volumes:
  #     - ./volumes/functions:/home/deno/functions:Z
  #   command:
  #     - start
  #     - --main-service
  #     - /home/deno/functions/main

  analytics:
    container_name: supabase-analytics
    image: supabase/logflare:1.4.0
    healthcheck:
      test: [ "CMD", "curl", "http://localhost:4000/health" ]
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
    # Uncomment to use Big Query backend for analytics
    # volumes:
    #   - type: bind
    #     source: ${PWD}/gcloud.json
    #     target: /opt/app/rel/logflare/bin/gcloud.json
    #     read_only: true
    environment:
      LOGFLARE_NODE_HOST: 127.0.0.1
      DB_USERNAME: supabase_admin
      DB_DATABASE: _supabase
      DB_HOSTNAME: ${POSTGRES_HOST}
      DB_PORT: ${POSTGRES_PORT}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_SCHEMA: _analytics
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
      LOGFLARE_SINGLE_TENANT: true
      LOGFLARE_SUPABASE_MODE: true
      LOGFLARE_MIN_CLUSTER_SIZE: 1

      # Comment variables to use Big Query backend for analytics
      POSTGRES_BACKEND_URL: postgresql://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/_supabase
      POSTGRES_BACKEND_SCHEMA: _analytics
      LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true
      # Uncomment to use Big Query backend for analytics
      # GOOGLE_PROJECT_ID: ${GOOGLE_PROJECT_ID}
      # GOOGLE_PROJECT_NUMBER: ${GOOGLE_PROJECT_NUMBER}
    ports:
      - 4000:4000

  # Comment out everything below this point if you are using an external Postgres database
  db:
    container_name: supabase-db
    image: supabase/postgres:15.8.1.020
    healthcheck:
      test: pg_isready -U postgres -h localhost
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      vector:
        condition: service_healthy
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/postgresql.conf
      - -c
      - log_min_messages=fatal # prevents Realtime polling queries from appearing in logs
    restart: unless-stopped
    environment:
      POSTGRES_HOST: /var/run/postgresql
      PGPORT: ${POSTGRES_PORT}
      POSTGRES_PORT: ${POSTGRES_PORT}
      PGPASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATABASE: ${POSTGRES_DB}
      POSTGRES_DB: ${POSTGRES_DB}
      JWT_SECRET: ${JWT_SECRET}
      JWT_EXP: ${JWT_EXPIRY}
    volumes:
      - ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
      # Must be superuser to create event trigger
      - ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
      # Must be superuser to alter reserved role
      - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
      # Initialize the database settings with JWT_SECRET and JWT_EXP
      - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:Z
      # PGDATA directory is persisted between restarts
      - ./volumes/db/data:/var/lib/postgresql/data:Z
      # Changes required for internal supabase data such as _analytics
      - ./volumes/db/_supabase.sql:/docker-entrypoint-initdb.d/migrations/97-_supabase.sql:Z
      # Changes required for Analytics support
      - ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
      # Changes required for Pooler support
      - ./volumes/db/pooler.sql:/docker-entrypoint-initdb.d/migrations/99-pooler.sql:Z
      # Use named volume to persist pgsodium decryption key between restarts
      - db-config:/etc/postgresql-custom

  vector:
    container_name: supabase-vector
    image: timberio/vector:0.28.1-alpine
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://vector:9001/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    volumes:
      - ./volumes/logs/vector.yml:/etc/vector/vector.yml:ro
      - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro
    environment:
      LOGFLARE_API_KEY: ${LOGFLARE_API_KEY}
    command: [ "--config", "/etc/vector/vector.yml" ]

  # Update the DATABASE_URL if you are using an external Postgres database
  supavisor:
    container_name: supabase-pooler
    image: supabase/supavisor:1.1.56
    healthcheck:
      test: curl -sSfL --head -o /dev/null "http://127.0.0.1:4000/api/health"
      interval: 10s
      timeout: 5s
      retries: 5
    depends_on:
      db:
        condition: service_healthy
      analytics:
        condition: service_healthy
    command:
      - /bin/sh
      - -c
      - /app/bin/migrate && /app/bin/supavisor eval "$$(cat /etc/pooler/pooler.exs)" && /app/bin/server
    restart: unless-stopped
    ports:
      - ${POSTGRES_PORT}:5432
      - ${POOLER_PROXY_PORT_TRANSACTION}:6543
    environment:
      - PORT=4000
      - POSTGRES_PORT=${POSTGRES_PORT}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - DATABASE_URL=ecto://supabase_admin:${POSTGRES_PASSWORD}@db:${POSTGRES_PORT}/_supabase
      - CLUSTER_POSTGRES=true
      - SECRET_KEY_BASE=UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
      - VAULT_ENC_KEY=your-encryption-key-32-chars-min
      - API_JWT_SECRET=${JWT_SECRET}
      - METRICS_JWT_SECRET=${JWT_SECRET}
      - REGION=local
      - ERL_AFLAGS=-proto_dist inet_tcp
      - POOLER_TENANT_ID=${POOLER_TENANT_ID}
      - POOLER_DEFAULT_POOL_SIZE=${POOLER_DEFAULT_POOL_SIZE}
      - POOLER_MAX_CLIENT_CONN=${POOLER_MAX_CLIENT_CONN}
      - POOLER_POOL_MODE=transaction
    volumes:
      - ./volumes/pooler/pooler.exs:/etc/pooler/pooler.exs:ro

volumes:
  db-config:

.env file:

############
# Secrets
# YOU MUST CHANGE THESE BEFORE GOING INTO PRODUCTION
############

POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password

JWT_KEYS=[{"use":"sig","kty":"RSA","kid":"8sHzl0IWZfVutBjRNXBf6uw4OtMxkrkLhhR1HJPWbmQ","alg":"RS256","n":"8z7yqkn_u_lO6ql7C7Q8b3IhFMhr1cg-wKmTrMsRDndpNaQw3JHR9U44w5QTDwukLT3veZl-bu8BXo_Wc_J0Dx7Ajc3ddYNi4qWTzWP4uimqew4Ir4RBa3iRCsUylv8GUbKKItS5eUf3EqtRogVCth7xCvlHVxvV3G5hsEbS5W7X2FmOO6sAicLraGiMrYLBFss37QGYdjN9CZauu_Xfx14-m-cJ5AO_Ir5Mkgcl24bQm9KuAzpGibL26_RrR9pdZTvh4fBy0IYaGV1IYDDc5JHhY2UfIqqq6GYkhZ9C2vuTjkpooey050tYGTYuY26_z1yo-2tShmvXaLjYgmphiw","e":"AQAB","d":"mFxDb3quXrWIQuApnGkmub_JDNWFBgFJnTAauc7wPhl5ownXOTF1S6vVTlv_nBr0mQoEaCxGz4GRYAPElhe1roranXfnUWYcmE6SR8Jo12Kl0DI4Kogy2fhJEW_3gjD3alDkyXBpRJhZIC6DEXMuGBlFblQ55UwgJtRVCC80hlQw4prqERGCIZWLriJU_Nb2pPUE_xhWdVhiU-slSxbS_0B-84W_eOpQbOAeAEGmzzi3rhDt9G-ow2uwxQgphj5xp03ThXitnytcp5RmYKdBPPeaxFDQ5v4CNli0XYvk1uNuMveSnClkBCxaHN_O6eWY3SbSgTdENKwh744Iqeg3YQ","p":"_dJktEUmSNTQXAORYRTmhrvkI-HV0-vOumsFkpryutfpWMPgeciS0pWhqP0wk5Xy5sAPlJ1ioMmeXiZEligid0_tP7kKH8-ElF3ekSDwD-beIGvkWBXYnZNsd-UrKBKNQw7J-86bah502BOteRaTkko9E44aAozGuvgBOC0Z62M","q":"9VVSITgQhyTqfrUjlTgfYWhDTQrs-JQHl4y1mN2sCP8M7_8V34DrYlKP573mbgvB_n7mW-47i6Nwx9B4Rq8w2J1JYJiA0_JRz0CiI2Pv9D2rOWPkedIrL6qxlzwVvz3fBcKBgBjm8x4MYUGtZ2XypYBvhc-j-M357vRnM59bzbk","dp":"o7-gCEy0LjhdU39Zwu_g6Ps-a4e-k0GF1O5GYhZkkfXJLOLxZp_nWMP_zy3IsO4EDqnJY29FucVYzhSSGu05jw-ZV4rg5TTTq4QDmk1NknS2yOPSJKGzZbU-PPszpF6Tk7dux2y7BvMvHldTitLt0WrjjEIYtZxseSKWZs9x8VE","dq":"eEiwz-CxGdGbtywQmiS-HgAEn01wCiBp6H_wuVZV9sM2EKU8kCyhO7_HFpQg2muhXanSP9h6EWi87vrjPaS_ijTzuQyMfV4dhkPmOvvQtitWO_kiGChXTDOghsnKz80B_8zxuWB8O07MOxL8demiIkrqYuz_NAmpNONXhhPn6uE","qi":"UskQcjxeNNzGGPHI8cwYG29ZcSRgfE7aPXrczw2ndAUw4eo9Z_IDKbP8x6sjq8PQQkrhfwl9ZiVeGin1U8PNfXFrYN4a4Jw0kl0bwJkfMJFiSeCIXwrDXbhvfHDHQbMrMjABRy4CYIoYroq8URgJyj2bPwebcBdHffMJGJznVMY","key_ops":["sign","verify"]}]
ANON_KEY=eyJhbGciOiJSUzI1NiIsImtpZCI6IjhzSHpsMElXWmZWdXRCalJOWEJmNnV3NE90TXhrcmtMaGhSMUhKUFdibVEiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJzdXBhYmFzZSIsImV4cCI6MTc2ODY2OTQxMywiaWF0IjoxNzM3MTMzNDEzLCJpc3MiOiJzdXBhYmFzZSIsImp0aSI6IjY2MTMyNDVjNmRiMTk1N2U0ZTE0NGRmYzA5MjUyMGIzOTFiMDhjNWQ2ZDUyZGJlMTVhNjliOWRhZDdkNGI2NjUiLCJuYmYiOjE3MzcxMzM0MTMsInJvbGUiOiJhbm9uIiwic3ViIjoic3VwYWJhc2UifQ.B9TPA_uRETKKAVEYknUWPS1fOa4YEH-wI42X7i7OfTKNQIpVQY0ftRvQ-QDJD1oWYjB4cOt4SKqNjk0jKR__8nVI7wEJoHrwwxP55pvC3bTtFQLYwZMh85_9R5QGsLz0Y4MDzLAkf30XCX5wWwkvy_FijD2puSQuYS6400hBKQ4XPtA6b2frb7NjFGR4xmNPqL0KkMhtHNmE0HdD-EAvea8PN_oGsTuraEcDLwGsl5oAkEJT8ylyYpZvlLphURlIvclj1hcinOvraM7Cy782TQwVjqk7dO4Ad0t_8s-jlQDpwAU8xwFXyuVi19KeuPtYcxKxCyCEBFw9dky_FIPbuQ
SERVICE_ROLE_KEY=eyJhbGciOiJSUzI1NiIsImtpZCI6IjhzSHpsMElXWmZWdXRCalJOWEJmNnV3NE90TXhrcmtMaGhSMUhKUFdibVEiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJzdXBhYmFzZSIsImV4cCI6MTc2ODY2OTQxMywiaWF0IjoxNzM3MTMzNDEzLCJpc3MiOiJzdXBhYmFzZSIsImp0aSI6Ijg3MGIxNmEyZGEyOGFjNDk4NGI1NjVmZWFkZTQ5NmZjZDFlYzQ3Yzk5ZDFhNzY1YjlkMTYzYTkwNTgxNmYyMDQiLCJuYmYiOjE3MzcxMzM0MTMsInJvbGUiOiJzZXJ2aWNlX3JvbGUiLCJzdWIiOiJzdXBhYmFzZSJ9.iHJ_sHz0XuK3_iXjUzKnLwR0gyUEa29tsPkRuFE23-Th3zAvmi-RTogMIZ_caawFhuCYlvTCjScuxSd0OX4uMvX2ZRWhh9PslNShkWgWtRvKHCdDIeUk-3er2tJpkxEhFJJ5OwyLiVcf8E38ACcxGv8j9-0MS7qtldH1BlGfyqKrKGLPga2s8Ej2XoppgLQ4U4AhiNZTeZLpCl3bFwIsRbZ6InEAieDks4AZTDWayzfKL_lFQIy1tfLNWTYyZjYzUyKgRe1fZbWBV6dIHMYleraNv3FLDHBkngThSHBe6vRQfSyk7o1yockjV67HepAD3H0s-zosUhRZ2ggQZndiFg
DASHBOARD_USERNAME=supabase
DASHBOARD_PASSWORD=this_password_is_insecure_and_should_be_updated

############
# Database - You can change these to any PostgreSQL database that has logical replication enabled.
############

POSTGRES_HOST=db
POSTGRES_DB=postgres
POSTGRES_PORT=5432
# default user is postgres

############
# Supavisor -- Database pooler
############
POOLER_PROXY_PORT_TRANSACTION=6543
POOLER_DEFAULT_POOL_SIZE=20
POOLER_MAX_CLIENT_CONN=100
POOLER_TENANT_ID=your-tenant-id


############
# API Proxy - Configuration for the Kong Reverse proxy.
############

KONG_HTTP_PORT=8000
KONG_HTTPS_PORT=8443


############
# API - Configuration for PostgREST.
############

PGRST_DB_SCHEMAS=public,storage,graphql_public


############
# Auth - Configuration for the GoTrue authentication server.
############

## General
SITE_URL=http://localhost:3000
ADDITIONAL_REDIRECT_URLS=
JWT_EXPIRY=3600
DISABLE_SIGNUP=false
API_EXTERNAL_URL=http://localhost:8000

## Mailer Config
MAILER_URLPATHS_CONFIRMATION="/auth/v1/verify"
MAILER_URLPATHS_INVITE="/auth/v1/verify"
MAILER_URLPATHS_RECOVERY="/auth/v1/verify"
MAILER_URLPATHS_EMAIL_CHANGE="/auth/v1/verify"

## Email auth
ENABLE_EMAIL_SIGNUP=true
ENABLE_EMAIL_AUTOCONFIRM=false
[email protected]
SMTP_HOST=supabase-mail
SMTP_PORT=2500
SMTP_USER=fake_mail_user
SMTP_PASS=fake_mail_password
SMTP_SENDER_NAME=fake_sender
ENABLE_ANONYMOUS_USERS=false

## Phone auth
ENABLE_PHONE_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true


############
# Studio - Configuration for the Dashboard
############

STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project

STUDIO_PORT=3000
# replace if you intend to use Studio outside of localhost
SUPABASE_PUBLIC_URL=http://localhost:8000

# Enable webp support
IMGPROXY_ENABLE_WEBP_DETECTION=true

# Add your OpenAI API key to enable SQL Editor Assistant
OPENAI_API_KEY=

############
# Functions - Configuration for Functions
############
# NOTE: VERIFY_JWT applies to all functions. Per-function VERIFY_JWT is not supported yet.
FUNCTIONS_VERIFY_JWT=false

############
# Logs - Configuration for Logflare
# Please refer to https://supabase.com/docs/reference/self-hosting-analytics/introduction
############

LOGFLARE_LOGGER_BACKEND_API_KEY=your-super-secret-and-long-logflare-key

# Change vector.toml sinks to reflect this change
LOGFLARE_API_KEY=your-super-secret-and-long-logflare-key

# Docker socket location - this value will differ depending on your OS
DOCKER_SOCKET_LOCATION=/var/run/docker.sock

# Google Cloud Project details
GOOGLE_PROJECT_ID=GOOGLE_PROJECT_ID
GOOGLE_PROJECT_NUMBER=GOOGLE_PROJECT_NUMBER

Expected behavior

I would expect a JWKS env var to accept an array of valid keys, since multiple keys can be valid at the same time (e.g. during a key rotation).
It's also awkward that it accepts a different JWKS format than the other services.
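
For example, during a rotation the JWKS would naturally contain both the old and the new key (shapes illustrative, key material elided):

[
  {"use":"sig","kty":"RSA","kid":"old-key","alg":"RS256","n":"...","e":"AQAB"},
  {"use":"sig","kty":"RSA","kid":"new-key","alg":"RS256","n":"...","e":"AQAB"}
]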

Removing the JSON array wrapper from JWT_KEYS in the .env file (i.e. JWT_KEYS={"use":"sig",...snip...}) allows realtime to apply the migration.
However, that breaks supabase/auth, which then needs its env var wrapped back into an array, e.g. GOTRUE_JWT_KEYS: "[${JWT_KEYS}]", as sketched below.
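
A sketch of that workaround (key material elided; quoting may need adjusting for your compose setup):

# .env - a single JWK object, no array wrapper
JWT_KEYS={"use":"sig","kty":"RSA","kid":"...","alg":"RS256","n":"...","e":"AQAB",...snip...}

# docker-compose.yml
auth:
  environment:
    GOTRUE_JWT_KEYS: "[${JWT_KEYS}]"   # re-wrap the single key object in an array for gotrue
realtime:
  environment:
    API_JWT_JWKS: ${JWT_KEYS}          # realtime takes the single key object as-is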

As I am working on deploying Supabase to Kubernetes, this format mismatch will make generating the relevant secrets very difficult.

System information

linux/docker

Additional context

EDIT:
Sorry, I accidentally submitted this before I was finished.

Towerful added the bug label Jan 17, 2025
@filipecabaco
Member

Hi, I received a warning about a potential leak of a key; please do check it's not a sensitive secret.

I will look into the bug as soon as I can 👍

@Towerful
Author

Hi, I received a warning about a potential leak of a key; please do check it's not a sensitive secret.

I will look into the bug as soon as I can 👍

Ah, sorry about that. Thanks for checking in.
They are all throw-away keys.

@filipecabaco
Member

👍 Will check the bug when possible, thank you for reporting.
