# Deployment
Production deployment guide for self-hosting the Pulse server.
## Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `DATABASE_URL` | Yes | — | PostgreSQL connection string |
| `JWT_SECRET` | Yes | — | Secret for signing SDK JWT tokens |
| `ADMIN_JWT_SECRET` | Yes | — | Secret for signing admin panel JWT tokens |
| `PORT` | No | `4567` | Server port |
| `REDIS_URL` | No | — | Redis URL for pub/sub (enables horizontal scaling) |
| `NODE_ENV` | No | — | Set to `production` for production mode |
| `ALLOWED_ORIGINS` | No | `*` | Comma-separated allowed CORS origins |
| `S3_BUCKET` | No | — | S3 bucket name (enables S3 storage) |
| `S3_REGION` | No | `us-east-1` | AWS S3 region |
| `AWS_ACCESS_KEY_ID` | No | — | AWS access key (required when using S3) |
| `AWS_SECRET_ACCESS_KEY` | No | — | AWS secret key (required when using S3) |
| `UPLOAD_DIR` | No | `./uploads` | Local file storage directory (used when S3 is not configured) |
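The two JWT secrets must be long, random values. One common way to generate them (assuming `openssl` is available) is:

```shell
# Print a 64-character hex string suitable for JWT_SECRET or ADMIN_JWT_SECRET
openssl rand -hex 32
```

Run it once per secret so the SDK and admin panel tokens are signed with different keys.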
## Docker Compose (Production)
The repository includes a docker-compose.yml that runs PostgreSQL, Redis, and the Pulse server. For production, create an override or a dedicated file:
```yaml
services:
  redis:
    image: redis:7-alpine
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  postgres:
    image: postgres:17-alpine
    restart: always
    environment:
      POSTGRES_USER: pulse
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
      POSTGRES_DB: pulse
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pulse"]
      interval: 5s
      timeout: 3s
      retries: 5

  server:
    build: .
    restart: always
    ports:
      - "4567:4567"
    environment:
      DATABASE_URL: "postgres://pulse:${POSTGRES_PASSWORD}@postgres:5432/pulse"
      JWT_SECRET: "${JWT_SECRET}"
      ADMIN_JWT_SECRET: "${ADMIN_JWT_SECRET}"
      PORT: "4567"
      REDIS_URL: "redis://redis:6379"
      NODE_ENV: "production"
      ALLOWED_ORIGINS: "https://yourdomain.com"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  pgdata:
```

Set the secrets in a `.env` file alongside your `docker-compose.yml`:
```bash
JWT_SECRET=your-long-random-secret-here
ADMIN_JWT_SECRET=another-long-random-secret-here
POSTGRES_PASSWORD=a-strong-database-password
```

Start with:
```bash
docker compose up -d
```

## Reverse Proxy (nginx)
Place nginx in front of Pulse to handle TLS and proxy WebSocket connections:
```nginx
upstream pulse {
    server 127.0.0.1:4567;
}

server {
    listen 443 ssl http2;
    server_name pulse.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/pulse.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pulse.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://pulse;
        proxy_http_version 1.1;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts for long-lived WebSocket connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
```

> **WebSocket upgrade:** The `proxy_set_header Upgrade` and `proxy_set_header Connection "upgrade"` lines are required for WebSocket connections to work through the proxy.
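You will typically also want to redirect plain HTTP to HTTPS. A minimal companion server block (assuming the same `pulse.yourdomain.com` hostname as above) might look like:

```nginx
server {
    listen 80;
    server_name pulse.yourdomain.com;

    # Permanent redirect to the TLS endpoint
    return 301 https://$host$request_uri;
}
```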
## S3 Storage
By default, uploaded files are stored on the local filesystem under UPLOAD_DIR (defaults to ./uploads). For production, use S3 or an S3-compatible provider (MinIO, DigitalOcean Spaces, Cloudflare R2):
```bash
S3_BUCKET=my-pulse-uploads
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
```

When S3_BUCKET is set, the server automatically uses S3 instead of local storage. No other code changes are needed.
> **S3-compatible providers:** For providers like MinIO or DigitalOcean Spaces, also set the `S3_ENDPOINT` environment variable to point to the provider's API endpoint.
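As an illustration, a DigitalOcean Spaces configuration might look like the following (the bucket name, region, and endpoint shown are placeholders, not real values):

```bash
S3_BUCKET=my-pulse-uploads
S3_REGION=nyc3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
```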
## Redis for Horizontal Scaling
A single Pulse server instance works out of the box. To run multiple instances behind a load balancer, set REDIS_URL so that WebSocket messages are broadcast across all instances via Redis pub/sub:
```bash
REDIS_URL=redis://your-redis-host:6379
```

With Redis configured:
- Presence updates are shared across all server instances
- Messages sent on one instance reach clients connected to other instances
- Any instance can serve any client — no sticky sessions required
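As a sketch, the `server` service from the compose file above could then be replicated (this assumes you place your own load balancer in front of the replicas, since multiple replicas cannot all bind the same host port — remove the `ports` mapping and route through the load balancer instead):

```yaml
services:
  server:
    build: .
    restart: always
    deploy:
      replicas: 3   # requires a load balancer in front; no host port mapping
    environment:
      REDIS_URL: "redis://redis:6379"
```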
## Health Check
The server exposes a health endpoint at GET /health that verifies the database connection. Use it in your load balancer or orchestrator:
```bash
curl http://localhost:4567/health
# { "status": "ok" }
```

Returns 503 if the database is unreachable.
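For example, the `server` service in the compose file above could declare a container healthcheck against this endpoint (this assumes `curl` is available inside the server image; swap in `wget -qO-` if it is not):

```yaml
services:
  server:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4567/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```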
## Security Checklist
Before going to production, verify the following:
- [ ] **Change JWT secrets** — set `JWT_SECRET` and `ADMIN_JWT_SECRET` to long, random strings (at least 32 characters)
- [ ] **Set `ALLOWED_ORIGINS`** — restrict CORS to your domain(s) instead of `*`
- [ ] **Set `NODE_ENV=production`** — disables dev-only endpoints (`/dev/token`, `/dev/api-key`)
- [ ] **Use TLS** — serve over `wss://` and `https://` via a reverse proxy
- [ ] **Secure the database** — use a strong `POSTGRES_PASSWORD` and restrict network access
- [ ] **Protect secret keys** — keep `sk_` keys on your backend only; never expose them in frontend code
- [ ] **Set up S3** — use S3 or S3-compatible storage so uploads survive container restarts
- [ ] **Enable Redis** — required for running multiple server instances
- [ ] **Configure backups** — set up regular PostgreSQL backups
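For backups, one simple sketch is a nightly logical dump from the `postgres` container in the compose file above (the compose file path, backup directory, and schedule are placeholders to adapt):

```shell
# crontab entry: nightly logical backup at 03:00
# (% must be escaped as \% inside crontab)
0 3 * * * docker compose -f /opt/pulse/docker-compose.yml exec -T postgres pg_dump -U pulse pulse | gzip > /backups/pulse-$(date +\%F).sql.gz
```

Test restores periodically — a backup you have never restored is not a backup.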