Backends

OJS is backend-agnostic: the specification defines the behavioral contract, and backends implement it. Five official backends are available, each supporting all five conformance levels.

| Feature | Redis | PostgreSQL | NATS | Kafka | SQS |
| --- | --- | --- | --- | --- | --- |
| Storage engine | Redis 7.0+ | PostgreSQL 15+ | NATS JetStream + KV | Kafka + Redis | AWS SQS + DynamoDB |
| Atomicity | Lua scripts | SQL transactions | JetStream ack | Kafka offsets + Redis Lua | SQS visibility timeout |
| Dequeue strategy | ZPOPMIN / sorted sets | SELECT FOR UPDATE SKIP LOCKED | JetStream pull consumers | Kafka consumer groups | SQS ReceiveMessage |
| Persistence | RDB + AOF (configurable) | WAL (always durable) | JetStream file store | Kafka log + Redis | SQS (managed) + DynamoDB |
| Horizontal scaling | Redis Cluster | Read replicas, Citus | NATS clustering | Kafka partitions | Auto-scaling (managed) |
| Real-time notifications | Pub/Sub | LISTEN/NOTIFY | NATS subjects | Kafka topics | SQS long polling |
| Conformance level | Level 4 (Full) | Level 4 (Full) | Level 4 (Full) | Level 4 (Full) | Level 4 (Full) |
| Best for | High throughput, low latency | Durability, existing Postgres infrastructure | Lightweight, single-binary | Event streaming, high durability | AWS-native, serverless |

The Redis backend (ojs-backend-redis) uses Redis as the storage and coordination layer. It achieves atomic multi-key operations through Lua scripts.

Choose the Redis backend when:

  • You need the highest possible throughput (tens of thousands of jobs per second).
  • You already run Redis in your infrastructure.
  • Low latency on enqueue and dequeue is critical.
  • You are comfortable with Redis persistence trade-offs (RDB snapshots can lose recent data on crash; AOF mitigates this).

The Redis backend uses three internal layers:

  • internal/api/ handles HTTP requests using the chi router, validates requests, and returns structured error responses.
  • internal/core/ defines business logic interfaces: the job state machine, retry evaluation, workflow engine, and middleware chains.
  • internal/redis/ implements those interfaces using Redis data structures and Lua scripts for atomicity.

Jobs are stored as Redis hashes. Queues are implemented as sorted sets (priority + FIFO ordering). Scheduled jobs use a separate sorted set keyed by execution time. A scheduler goroutine polls the scheduled set and promotes jobs to their target queue when the time arrives.

```yaml
# docker-compose.yml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  ojs-server:
    image: ghcr.io/openjobspec/ojs-backend-redis:latest
    ports:
      - "8080:8080"
    environment:
      REDIS_URL: redis://redis:6379
```
```sh
docker compose up -d
curl http://localhost:8080/ojs/v1/health
```
| Environment variable | Default | Description |
| --- | --- | --- |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection string |
| `PORT` | `8080` | HTTP server port |
```sh
cd ojs-backend-redis
make build      # Builds to bin/ojs-server
make test       # Runs tests with race detector
make lint       # Runs go vet
make docker-up  # Starts via Docker Compose
```

The PostgreSQL backend (ojs-backend-postgres) uses PostgreSQL as the storage layer. It achieves non-blocking dequeue using SELECT ... FOR UPDATE SKIP LOCKED and real-time notifications using LISTEN/NOTIFY.

Choose the PostgreSQL backend when:

  • Durability is non-negotiable (WAL ensures zero data loss on crash).
  • You already run PostgreSQL and want to avoid adding another infrastructure dependency.
  • You need strong transactional guarantees (enqueue a job inside the same transaction as your business logic).
  • Your throughput requirements are moderate (thousands of jobs per second, not tens of thousands).

The PostgreSQL backend follows the same three-layer structure as the Redis backend:

  • internal/api/ handles HTTP requests using the chi router.
  • internal/core/ defines business logic interfaces.
  • internal/postgres/ implements those interfaces using pgx/v5 and SQL.

Jobs are stored in a jobs table. The dequeue query uses SELECT ... FOR UPDATE SKIP LOCKED, which allows multiple workers to dequeue concurrently without blocking each other. LISTEN/NOTIFY enables workers to receive immediate notification when new jobs are available, reducing poll latency.

```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: ojs
      POSTGRES_USER: ojs
      POSTGRES_PASSWORD: ojs
  ojs-server:
    image: ghcr.io/openjobspec/ojs-backend-postgres:latest
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://ojs:ojs@postgres:5432/ojs?sslmode=disable
```
```sh
docker compose up -d
curl http://localhost:8080/ojs/v1/health
```
| Environment variable | Default | Description |
| --- | --- | --- |
| `DATABASE_URL` | `postgres://localhost:5432/ojs?sslmode=disable` | PostgreSQL connection string |
| `PORT` | `8080` | HTTP server port |
```sh
cd ojs-backend-postgres
make build      # Builds to bin/ojs-server
make test       # Runs tests with race detector
make lint       # Runs go vet
make docker-up  # Starts via Docker Compose
```

The NATS backend (ojs-backend-nats) uses NATS JetStream for message delivery and JetStream-backed Key-Value stores for state management. It is a lightweight, single-binary solution with no external dependencies beyond a NATS server.

Choose the NATS backend when:

  • You want a lightweight, operationally simple deployment (single binary + NATS).
  • You need built-in clustering and multi-region support without additional tooling.
  • You prefer a unified messaging and state platform.
  • You want fast startup and low resource footprint.

The NATS backend follows the same three-layer structure:

  • internal/api/ handles HTTP requests using the chi router.
  • internal/core/ defines business logic interfaces.
  • internal/nats/ implements those interfaces using JetStream streams and KV buckets.

Jobs are stored in a JetStream KV bucket (ojs-jobs). Queues use JetStream subjects (ojs.queue.{name}.jobs). Background schedulers handle promotion of scheduled jobs (1s interval), retry promotion (200ms), stalled job reaping (500ms), and cron scheduling (10s).

```yaml
# docker-compose.yml
services:
  nats:
    image: nats:2-alpine
    command: --jetstream
    ports:
      - "4222:4222"
  ojs-server:
    image: ghcr.io/openjobspec/ojs-backend-nats:latest
    ports:
      - "8080:8080"
    environment:
      NATS_URL: nats://nats:4222
```
```sh
docker compose up -d
curl http://localhost:8080/ojs/v1/health
```
| Environment variable | Default | Description |
| --- | --- | --- |
| `NATS_URL` | `nats://localhost:4222` | NATS connection string |
| `PORT` | `8080` | HTTP server port |
```sh
cd ojs-backend-nats
make build      # Builds to bin/ojs-server
make test       # Runs tests with race detector
make lint       # Runs go vet
make docker-up  # Starts via Docker Compose
```

The Kafka backend (ojs-backend-kafka) uses a hybrid architecture: Kafka handles message transport and durability while Redis provides per-job state management. This combination delivers Kafka’s streaming strengths with the random-access state operations that job processing requires.

Choose the Kafka backend when:

  • You already run Kafka and want to consolidate on a single messaging platform.
  • You need extremely high throughput (50,000+ jobs/sec per partition).
  • You want durable, replayable job history via Kafka’s log.
  • Event streaming and job processing coexist in your architecture.

The Kafka backend uses a hybrid storage model:

  • Kafka handles enqueue (produce), dequeue (consume), and provides durable ordered storage via topics.
  • Redis manages per-job state, visibility timeouts, unique constraints, workflows, and cron schedules.

Topics follow the pattern ojs.queue.{name} for job queues, ojs.queue.{name}.retry for retry queues, ojs.dead.{name} for dead letters, and ojs.events for lifecycle events.

```yaml
# docker-compose.yml
services:
  kafka:
    image: confluentinc/cp-kafka:7.6.0
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      CLUSTER_ID: kafka-ojs-cluster-001
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  ojs-server:
    image: ghcr.io/openjobspec/ojs-backend-kafka:latest
    ports:
      - "8080:8080"
    environment:
      KAFKA_BROKERS: kafka:9092
      REDIS_URL: redis://redis:6379
```
```sh
docker compose up -d
curl http://localhost:8080/ojs/v1/health
```
| Environment variable | Default | Description |
| --- | --- | --- |
| `KAFKA_BROKERS` | `localhost:9092` | Comma-separated Kafka broker addresses |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection for state management |
| `PORT` | `8080` | HTTP server port |
```sh
cd ojs-backend-kafka
make build      # Builds to bin/ojs-server
make test       # Runs tests with race detector
make lint       # Runs go vet
make docker-up  # Starts via Docker Compose
```

The SQS backend (ojs-backend-sqs) uses AWS SQS for message transport and DynamoDB for state management. It is designed for AWS-native deployments and supports local development via LocalStack.

Choose the SQS backend when:

  • You are deploying on AWS and want fully managed infrastructure.
  • You need serverless-compatible job processing (Lambda consumers).
  • You want automatic scaling without capacity planning.
  • You prefer pay-per-use pricing over provisioned infrastructure.

The SQS backend uses a hybrid storage model:

  • SQS handles enqueue (SendMessage), dequeue (ReceiveMessage), visibility timeout management, and dead letter queues.
  • DynamoDB manages job metadata, workflow state, cron schedules, and unique constraints.

SQS queues follow the pattern ojs-{name} with optional FIFO queues (ojs-{name}.fifo) for strict ordering. Terraform modules are provided for AWS infrastructure provisioning.

```yaml
# docker-compose.yml (LocalStack for local development)
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      SERVICES: sqs,dynamodb
  ojs-server:
    image: ghcr.io/openjobspec/ojs-backend-sqs:latest
    ports:
      - "8080:8080"
    environment:
      AWS_REGION: us-east-1
      AWS_ENDPOINT_URL: http://localstack:4566
      DYNAMODB_TABLE: ojs-jobs
      SQS_QUEUE_PREFIX: ojs
```
```sh
docker compose up -d
curl http://localhost:8080/ojs/v1/health
```
| Environment variable | Default | Description |
| --- | --- | --- |
| `AWS_REGION` | `us-east-1` | AWS region |
| `AWS_ENDPOINT_URL` | (none) | Custom endpoint (LocalStack: `http://localhost:4566`) |
| `DYNAMODB_TABLE` | `ojs-jobs` | DynamoDB table name |
| `SQS_QUEUE_PREFIX` | `ojs` | SQS queue name prefix |
| `SQS_USE_FIFO` | `false` | Use FIFO queues for ordering guarantees |
| `PORT` | `8080` | HTTP server port |
```sh
cd ojs-backend-sqs
make build      # Builds to bin/ojs-server
make test       # Runs tests with race detector
make lint       # Runs go vet
make docker-up  # Starts via Docker Compose (LocalStack)
```

All five official backends support all five conformance levels:

| Level | Name | Features |
| --- | --- | --- |
| 0 | Core | Enqueue, fetch, ack, nack, health, queues |
| 1 | Lifecycle | Job info, cancel, dead letter, heartbeat |
| 2 | Scheduling | Delayed jobs, cron jobs |
| 3 | Workflows | Chain, group, batch primitives |
| 4 | Full | Batch enqueue, unique jobs, priority, queue pause/resume, queue stats |

Check a server’s conformance level via the manifest:

```sh
curl http://localhost:8080/ojs/manifest
```

OJS is designed to be implementable on any storage engine. See the Implement a Backend guide for a step-by-step walkthrough. The conformance test suite validates your implementation against all five levels.