Deployment Guide
Step-by-step instructions to deploy inferonIQ in your infrastructure. Choose the deployment model that fits your organization.
Prerequisites
All deployment options require:
- PostgreSQL 15+ database (Supabase-managed or self-hosted) with `pgvector`, `pg_trgm`, and `uuid-ossp` extensions
- Anthropic API key (Claude Sonnet/Haiku)
- OpenAI API key (embeddings + GPT-4o Mini)
- Container runtime (Docker 24+) for all options except bare-metal
- At least 2 GB RAM and 1 vCPU per app instance
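Before starting any deployment, it can help to fail fast on missing configuration. A minimal preflight sketch (the function name is illustrative; the variable names are the required ones from the Environment Variables section):

```shell
# Preflight sketch (illustrative): report any required environment variable
# that is unset or empty, so a deployment fails fast instead of at runtime.
check_required_env() {
  missing=0
  for var in NEXT_PUBLIC_SUPABASE_URL NEXT_PUBLIC_SUPABASE_ANON_KEY \
             SUPABASE_SERVICE_ROLE_KEY ANTHROPIC_API_KEY OPENAI_API_KEY; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "MISSING: $var"
      missing=1
    fi
  done
  return "$missing"
}

check_required_env && echo "env OK" || echo "set the variables above first"
```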
Environment Variables
Every deployment requires these environment variables to be set on the application container:
# Required
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOi...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOi...
ANTHROPIC_API_KEY=sk-ant-api...
OPENAI_API_KEY=sk-...
# Optional
LICENSE_ENFORCEMENT_MODE=soft # "soft" or "hard"
Option 1: On-Premises (Docker Compose)
The simplest deployment — runs the full stack on a single server.
Step 1: Prepare the server
# Requirements: Linux server with Docker 24+ and Docker Compose v2
# Minimum: 4 GB RAM, 2 vCPU, 50 GB disk
# Install Docker (if not already)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
Step 2: Clone the repository
git clone <repo-url> /opt/inferoniq
cd /opt/inferoniq
Step 3: Configure environment
cp .env.example .env
# Edit .env with your secrets (see Prerequisites above)
Step 4: Start all services
docker compose up -d --build
# Verify all containers are running
docker compose ps
# Expected output:
# NAME STATUS
# inferoniq-app Up (healthy)
# inferoniq-db Up (healthy)
# inferoniq-studio Up
Step 5: Run migrations
# Migrations auto-apply via the mounted volume.
# To verify:
docker compose exec db psql -U postgres -d inferoniq \
-c "SELECT tablename FROM pg_tables WHERE schemaname='public' ORDER BY tablename;" | head -20
Step 6: Seed demo data (optional)
docker compose exec app npx tsx scripts/seed-demo-data.ts
Step 7: Access
| Service | URL |
|---|---|
| Application | http://<server-ip>:3000 |
| Supabase Studio | http://<server-ip>:3001 |
| PostgreSQL | <server-ip>:5432 |
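Right after `docker compose up -d`, the app can take a short while to pass its healthcheck. A small polling sketch (the helper name and defaults are illustrative) that waits for the health endpoint before you open the URLs above:

```shell
# Sketch (illustrative helper): poll a URL every 2 seconds until it responds,
# giving up after a timeout (in seconds).
wait_for_health() {
  url=$1
  timeout=${2:-60}
  waited=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    waited=$((waited + 2))
    if [ "$waited" -ge "$timeout" ]; then
      echo "gave up after ${timeout}s: $url"
      return 1
    fi
    sleep 2
  done
  echo "healthy: $url"
}

# Example (replace <server-ip> with your host):
# wait_for_health "http://<server-ip>:3000/api/health" 120
```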
Step 8: Manage
# View logs
docker compose logs -f app
# Restart
docker compose restart app
# Update to new version
git pull
docker compose up -d --build
# Stop everything
docker compose down
# Stop and delete all data (destructive)
# docker compose down -v
Option 2: Kubernetes / Helm
Production-grade deployment with autoscaling, rolling updates, and ingress.
Step 1: Prerequisites
- Kubernetes cluster 1.24+ (EKS, AKS, GKE, or self-managed)
- Helm 3 installed
- Container registry with your inferonIQ image
- External PostgreSQL (Supabase-hosted recommended)
Step 2: Build & push the Docker image
docker build -t your-registry.com/inferoniq:1.0.0 .
docker push your-registry.com/inferoniq:1.0.0
Step 3: Create Kubernetes secrets
kubectl create namespace inferoniq
kubectl -n inferoniq create secret generic inferoniq-secrets \
--from-literal=SUPABASE_SERVICE_ROLE_KEY='your-key' \
--from-literal=ANTHROPIC_API_KEY='sk-ant-...' \
--from-literal=OPENAI_API_KEY='sk-...'
kubectl -n inferoniq create secret generic inferoniq-db-secret \
--from-literal=postgres-password='your-db-password'
Step 4: Deploy with Helm
cd deploy/helm
helm install inferoniq . \
--namespace inferoniq \
--set image.repository=your-registry.com/inferoniq \
--set image.tag=1.0.0 \
--set env.NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co \
--set env.NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
Step 5: Configure Ingress (optional)
Edit deploy/helm/values.yaml to set your domain and TLS:
ingress:
enabled: true
className: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: inferoniq.yourcompany.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: inferoniq-tls
hosts:
- inferoniq.yourcompany.com
Step 6: Verify
kubectl -n inferoniq get pods
kubectl -n inferoniq get svc
curl https://inferoniq.yourcompany.com/api/health
Step 7: Upgrade
helm upgrade inferoniq . \
--namespace inferoniq \
--set image.tag=1.1.0
Helm values reference
| Parameter | Default | Description |
|---|---|---|
| replicaCount | 2 | Number of pod replicas |
| autoscaling.enabled | true | Enable Horizontal Pod Autoscaler |
| autoscaling.maxReplicas | 10 | Maximum pods under load |
| resources.requests.cpu | 250m | CPU request per pod |
| resources.requests.memory | 512Mi | Memory request per pod |
| resources.limits.cpu | 1000m | CPU limit per pod |
| resources.limits.memory | 1Gi | Memory limit per pod |
| ingress.hosts[0].host | inferoniq.example.com | Your domain |
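As the number of overrides grows, the same settings are easier to track in a values file than in repeated --set flags. A sketch (the file name and values are illustrative; the keys are from the table above):

```yaml
# my-values.yaml (illustrative): override only what differs from the defaults
image:
  repository: your-registry.com/inferoniq
  tag: "1.0.0"
replicaCount: 2
autoscaling:
  enabled: true
  maxReplicas: 10
resources:
  requests:
    cpu: 250m
    memory: 512Mi
```

Apply it with `helm upgrade --install inferoniq . --namespace inferoniq -f my-values.yaml`.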
Option 3: AWS (ECS Fargate + RDS)
Serverless container deployment with managed PostgreSQL on AWS.
Step 1: Prerequisites
- AWS account with IAM admin access
- Terraform 1.5+ installed
- AWS CLI configured (`aws configure`)
- Docker image pushed to ECR
Step 2: Push image to ECR
# Create ECR repository
aws ecr create-repository --repository-name inferoniq
# Login, build, tag, push
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker build -t inferoniq .
docker tag inferoniq:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/inferoniq:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/inferoniq:latest
Step 3: Create terraform.tfvars
# deploy/terraform/aws/terraform.tfvars
image_uri = "<account-id>.dkr.ecr.us-east-1.amazonaws.com/inferoniq:latest"
db_password = "your-secure-db-password"
supabase_url = "https://your-project.supabase.co"
supabase_anon = "your-anon-key"
supabase_srv = "your-service-role-key"
anthropic_key = "sk-ant-..."
openai_key = "sk-..."
Step 4: Deploy with Terraform
cd deploy/terraform/aws
terraform init
terraform plan # Review the plan
terraform apply # Type "yes" to confirm
# Outputs:
# alb_dns_name = "inferoniq-123456.us-east-1.elb.amazonaws.com"
# db_endpoint = "inferoniq-db.abc123.us-east-1.rds.amazonaws.com:5432"
# s3_bucket = "inferoniq-documents"
Step 5: Run migrations against RDS
# Connect to RDS and run migrations
for f in supabase/migrations/*.sql; do
psql "postgresql://inferoniq:$DB_PASSWORD@<db-endpoint>/inferoniq" -f "$f"
done
Resources created
- VPC with 2 public + 2 private subnets
- RDS PostgreSQL 15 (Multi-AZ, encrypted at rest)
- ECS Fargate cluster with 2 tasks
- Application Load Balancer
- S3 bucket (AES-256 encrypted, no public access)
- CloudWatch log group (30-day retention)
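Once `terraform apply` finishes, the ALB DNS name from the outputs can be smoke-tested against the expected health payload. A sketch (the helper name is illustrative; the payload shape is the one shown in Post-Deployment Verification):

```shell
# Sketch (illustrative helper): fetch a URL and check that the health payload
# contains "status":"ok".
smoke_test() {
  url=$1
  body=$(curl -fsS "$url") || { echo "unreachable: $url"; return 1; }
  case $body in
    *'"status":"ok"'*) echo "healthy" ;;
    *) echo "unexpected response: $body"; return 1 ;;
  esac
}

# Example:
# smoke_test "http://$(terraform output -raw alb_dns_name)/api/health"
```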
Estimated monthly cost
| Service | Configuration | Estimated Cost |
|---|---|---|
| ECS Fargate (2 tasks) | 1 vCPU, 2GB RAM each | ~$60 |
| RDS PostgreSQL | db.t3.medium, Multi-AZ | ~$130 |
| ALB | Application Load Balancer | ~$20 |
| S3 | Document storage | ~$5 |
| Total | | ~$215/mo |
Option 4: Azure (Container Apps)
Serverless containers with managed PostgreSQL on Azure.
Step 1: Prerequisites
- Azure subscription
- Terraform 1.5+ installed
- Azure CLI configured (`az login`)
- Docker image pushed to Azure Container Registry
Step 2: Push image to ACR
# Create ACR
az acr create -n inferoniqcr -g your-rg --sku Basic
# Login and push
az acr login -n inferoniqcr
docker tag inferoniq inferoniqcr.azurecr.io/inferoniq:latest
docker push inferoniqcr.azurecr.io/inferoniq:latest
Step 3: Deploy with Terraform
cd deploy/terraform/azure
terraform init
terraform apply \
-var="image_uri=inferoniqcr.azurecr.io/inferoniq:latest" \
-var="db_password=your-secure-password" \
-var="supabase_url=https://your-project.supabase.co" \
-var="supabase_anon=your-anon-key" \
-var="supabase_srv=your-service-role-key" \
-var="anthropic_key=sk-ant-..." \
-var="openai_key=sk-..."
# Outputs:
# app_url = "inferoniq.happysky-abc123.eastus.azurecontainerapps.io"
# db_fqdn = "inferoniq-db.postgres.database.azure.com"
# storage_name = "inferoniqdocs"
Step 4: Run migrations
for f in supabase/migrations/*.sql; do
psql "postgresql://inferoniq:$DB_PASSWORD@<db_fqdn>/inferoniq?sslmode=require" -f "$f"
done
Resources created
- Resource Group
- Azure Database for PostgreSQL Flexible Server (GP_Standard_D2s_v3)
- Container Apps Environment + Container App (2–10 replicas, auto-scale)
- Storage Account + private Blob Container
- Log Analytics Workspace
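The migration command above embeds $DB_PASSWORD directly in a postgresql:// URL, which breaks if the password contains characters such as @, :, or /. A percent-encoding sketch in plain POSIX shell (the helper name is illustrative):

```shell
# Sketch (illustrative helper): percent-encode a string so it is safe to embed
# in a postgresql:// connection URL.
urlencode() {
  s=$1
  out=""
  while [ -n "$s" ]; do
    c=${s%"${s#?}"}   # first character
    s=${s#?}          # rest of the string
    case $c in
      [A-Za-z0-9._~-]) out="$out$c" ;;
      *) out=$(printf '%s%%%02X' "$out" "'$c") ;;
    esac
  done
  printf '%s\n' "$out"
}

# Example:
# psql "postgresql://inferoniq:$(urlencode "$DB_PASSWORD")@<db_fqdn>/inferoniq?sslmode=require"
```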
Option 5: Google Cloud (Cloud Run)
Fully managed serverless containers with Cloud SQL on GCP.
Step 1: Prerequisites
- GCP project with billing enabled
- Terraform 1.5+
- gcloud CLI configured
Step 2: Push image to GCR
gcloud auth configure-docker
docker tag inferoniq gcr.io/your-project/inferoniq:latest
docker push gcr.io/your-project/inferoniq:latest
Step 3: Deploy with Terraform
cd deploy/terraform/gcp
terraform init
terraform apply \
-var="project_id=your-gcp-project" \
-var="image_uri=gcr.io/your-project/inferoniq:latest" \
-var="db_password=your-secure-password" \
-var="supabase_url=https://your-project.supabase.co" \
-var="supabase_anon=your-anon-key" \
-var="supabase_srv=your-service-role-key" \
-var="anthropic_key=sk-ant-..." \
-var="openai_key=sk-..."
# Outputs:
# service_url = "https://inferoniq-abc123-uc.a.run.app"
# db_connection = "your-project:us-central1:inferoniq-db"
# gcs_bucket = "your-project-inferoniq-documents"
Step 4: Run migrations
# Use Cloud SQL Proxy for secure access
cloud-sql-proxy your-project:us-central1:inferoniq-db &
for f in supabase/migrations/*.sql; do
psql "postgresql://inferoniq:$DB_PASSWORD@localhost:5432/inferoniq" -f "$f"
done
Resources created
- Cloud SQL PostgreSQL 15 (Regional HA, point-in-time recovery)
- Cloud Run service (2–10 instances, health probes)
- GCS bucket (versioned, private)
- VPC Connector for private Cloud SQL access
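The proxy-then-migrate step above follows a general pattern: start a helper process in the background, do the work, and always stop the helper afterwards. A sketch of that pattern (the function name is illustrative; in practice the helper may also need a moment to open its local socket before the first connection):

```shell
# Sketch (illustrative): run a command while a background helper process is
# alive, then stop the helper and propagate the command's exit status.
with_background() {
  bg_cmd=$1
  shift
  $bg_cmd &           # e.g. cloud-sql-proxy your-project:us-central1:inferoniq-db
  bg_pid=$!
  "$@" && status=0 || status=$?
  kill "$bg_pid" 2>/dev/null
  wait "$bg_pid" 2>/dev/null || true
  return "$status"
}

# Example:
# with_background "cloud-sql-proxy your-project:us-central1:inferoniq-db" \
#   sh -c 'for f in supabase/migrations/*.sql; do
#     psql "postgresql://inferoniq:$DB_PASSWORD@localhost:5432/inferoniq" -f "$f"
#   done'
```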
Database Migrations
inferonIQ ships with 8 migration files that must be applied in order:
| # | File | Description |
|---|---|---|
| 001 | initial_schema.sql | Core tables (80+ tables: customers, users, invoices, contracts, NL2SQL, governance, insights, metering) |
| 002 | storage_buckets.sql | Supabase Storage buckets + RLS policies |
| 003 | functions.sql | Database functions and triggers |
| 004 | fix_missing_tables.sql | Additional relationship tables |
| 005 | enhance_qme.sql | Query Memory Engine: embeddings, FTS vectors, similarity functions |
| 006 | execute_readonly_query.sql | Readonly query execution function (RPC) |
| 007 | dedup_sync_notifications.sql | Document dedup fields, sync tables, notifications, agent configs |
| 008 | production_readiness.sql | Goods receipts, NL2SQL feedback tables, workspace members, query limits |
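Because the files are zero-padded (001 through 008), plain lexical glob expansion already yields the correct apply order, which is what the psql loop in the next section relies on. A small sketch that prints the resolved order before applying anything (the helper name is illustrative):

```shell
# Sketch (illustrative helper): print migration files in the order the shell
# glob resolves them, so the apply order can be checked by eye first.
list_migrations() {
  dir=$1
  found=0
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue
    found=1
    basename "$f"
  done
  [ "$found" -eq 1 ] || { echo "no .sql files in $dir"; return 1; }
}

# Example: list_migrations supabase/migrations
```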
Applying migrations
# Via Supabase CLI (if using Supabase-hosted DB)
supabase db push
# Via psql (any PostgreSQL)
for f in supabase/migrations/*.sql; do
psql "$DATABASE_URL" -f "$f"
done
# Docker Compose — migrations auto-apply on first boot
Post-Deployment Verification
After deployment, verify the application is healthy:
# 1. Health check
curl https://your-domain.com/api/health
# Expected: {"status":"ok","timestamp":"..."}
# 2. Verify API routes
curl https://your-domain.com/api/dashboards?type=executive
# Should return KPI data (or empty if no data seeded)
# 3. Verify database connectivity
curl https://your-domain.com/api/connections
# Should return {"connections":[]} (empty is OK)
# 4. Seed demo data (optional)
# From inside the container or a machine with DB access:
npx tsx scripts/seed-demo-data.ts
SSL / TLS Configuration
Kubernetes: Use cert-manager with Let's Encrypt. Set the cert-manager.io/cluster-issuer: letsencrypt-prod annotation on the Ingress.
AWS: Use ACM (AWS Certificate Manager) with the ALB. Certificates are free and auto-renew.
Azure/GCP: Container Apps and Cloud Run provide built-in managed TLS. Custom domains can be mapped via their respective consoles.
On-Prem: Place a reverse proxy (nginx, Caddy, or Traefik) in front of Docker Compose. Caddy provides automatic HTTPS:
# Caddyfile
inferoniq.yourcompany.com {
reverse_proxy app:3000
}
Monitoring & Logging
| Platform | Recommended Stack |
|---|---|
| AWS | CloudWatch Container Insights + CloudWatch Alarms |
| Azure | Azure Monitor + Log Analytics Workspace |
| GCP | Cloud Monitoring + Cloud Logging |
| Kubernetes | Prometheus + Grafana (via Helm chart annotations) |
| On-Prem | Docker logs + Loki + Grafana, or ELK stack |
All Terraform templates configure health checks against /api/health. Unhealthy instances are automatically restarted.
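For the on-prem option, where no managed monitoring stack exists out of the box, even a cron-driven probe of the same /api/health endpoint is useful. A sketch (the function name and log path are illustrative):

```shell
# Sketch (illustrative): one probe run prints a timestamped OK/DOWN line for
# the health endpoint; schedule it from cron and append to a log file.
probe() {
  url=$1
  if curl -fsS --max-time 10 "$url" >/dev/null 2>&1; then
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) OK $url"
  else
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) DOWN $url"
  fi
}

# Example crontab entry (every 5 minutes):
# */5 * * * * /usr/local/bin/inferoniq-probe >> /var/log/inferoniq-probe.log
```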
Troubleshooting
Application returns 500 errors on all API routes
Check that all environment variables are set; the most common cause is a missing SUPABASE_SERVICE_ROLE_KEY. Verify with: curl https://your-domain.com/api/health
NL2SQL queries return no results
The schema catalog needs to be populated. Run: npx tsx scripts/seed-demo-data.ts to create demo connections and auto-profile them, which populates catalog_assets.
Docker Compose: database not ready
The db container has a healthcheck. If the app starts before db is healthy, it will retry. Check: docker compose logs db
Terraform apply fails with permission errors
Ensure your IAM user/service principal has admin access. For AWS, verify your credentials with: aws sts get-caller-identity
Migrations fail with 'relation already exists'
All migrations use IF NOT EXISTS, so this warning is safe to ignore. If you see other errors, check that migrations are applied in numeric order (001 before 002, etc.).
Need help?
Check the API Reference or Architecture Guide for additional technical details.