
Architecture

Three-service architecture, infrastructure dependencies, and data flow in the Nenjo platform.

Overview

Nenjo is a three-service system. Each service is independently deployable and communicates through a combination of HTTP APIs, WebSockets, and message queues.

Frontend (Next.js)  <-->  Backend (Rust/Axum)  <-->  Worker (Rust)
                              |
                    PostgreSQL / Redis / NATS / S3

Services

Backend (Rust / Axum)

The backend is the central API server and data layer. It is built with Axum and organized as a Cargo workspace with two crates:

  • crates/api-server -- HTTP routes, WebSocket relay, authentication middleware, and the MCP server endpoint.
  • crates/services -- Business logic, database queries (compile-time checked with sqlx), and credential encryption.

The backend owns all persistent state. It exposes a REST API consumed by both the frontend and the worker, and relays real-time events between them over WebSocket connections backed by Redis pub/sub.
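The relay pattern can be sketched in std-only Rust. This is not Nenjo's actual code: the `Relay` type is hypothetical, and in-process `mpsc` channels stand in for Redis pub/sub topics and WebSocket connections.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

/// Minimal stand-in for the pub/sub relay: the backend publishes an
/// event to a named channel and every subscriber (e.g. a connected
/// worker) receives a copy. In production the channel would be a
/// Redis pub/sub topic and each subscriber a WebSocket connection.
struct Relay {
    subscribers: HashMap<String, Vec<Sender<String>>>,
}

impl Relay {
    fn new() -> Self {
        Relay { subscribers: HashMap::new() }
    }

    /// Register a subscriber on a channel; returns its receiving end.
    fn subscribe(&mut self, channel_name: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.subscribers
            .entry(channel_name.to_string())
            .or_default()
            .push(tx);
        rx
    }

    /// Publish an event, fanning it out to all current subscribers.
    fn publish(&self, channel_name: &str, event: &str) {
        if let Some(subs) = self.subscribers.get(channel_name) {
            for tx in subs {
                let _ = tx.send(event.to_string()); // ignore closed subscribers
            }
        }
    }
}

fn main() {
    let mut relay = Relay::new();
    let worker_rx = relay.subscribe("chat.message");
    relay.publish("chat.message", r#"{"session":"s1","text":"hello"}"#);
    println!("worker got: {}", worker_rx.recv().unwrap());
}
```

The key property the real system shares with this sketch: publishers never address workers directly; they only name a channel, and whatever is subscribed at that moment receives the event.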

Frontend (Next.js)

The frontend is a Next.js application that provides the dashboard, chat interface, pipeline editor, and settings pages. It authenticates users via Clerk JWT tokens and communicates exclusively with the backend API.

See Authentication for details on the auth flow.

Worker (Rust)

The worker is the execution engine. It runs LLM-backed agents, executes pipelines, and manages tool invocations. The worker connects to the backend via WebSocket to receive events (chat messages, ticket executions, bootstrap updates) and calls the backend API to report results.

The worker holds no persistent state of its own -- it bootstraps from the backend on startup and caches data locally under ~/.nenjo/data/. See Worker and Bootstrap and Hot-Swap for details.
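The local cache amounts to write-through files the worker can reread on restart. A minimal sketch, with hypothetical function names and the base directory passed as a parameter (the real worker uses ~/.nenjo/data/):

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Where a cached resource would live under the worker's data directory.
fn cache_path(base: &Path, resource: &str) -> PathBuf {
    base.join(format!("{resource}.json"))
}

/// Persist a freshly fetched bootstrap payload to the local cache.
fn write_cache(base: &Path, resource: &str, payload: &str) -> io::Result<()> {
    fs::create_dir_all(base)?;
    fs::write(cache_path(base, resource), payload)
}

/// On startup, try the local cache before calling the backend.
fn read_cache(base: &Path, resource: &str) -> Option<String> {
    fs::read_to_string(cache_path(base, resource)).ok()
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join("nenjo-cache-demo");
    write_cache(&base, "agents", r#"[{"id":"a1"}]"#)?;
    println!("cached agents: {:?}", read_cache(&base, "agents"));
    Ok(())
}
```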

Infrastructure

PostgreSQL 16

Primary data store for all platform resources: projects, roles, agents, pipelines, tickets, executions, skills, modes, councils, MCP server configs, and API keys. All queries are compile-time checked with sqlx.

Redis

Used for two purposes:

  1. Pub/sub event relay -- The backend publishes events (chat messages, ticket executions) to Redis channels. The WebSocket layer subscribes and forwards them to the connected worker.
  2. Session and cache storage -- Short-lived data like active mode sessions and rate-limit counters.
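The rate-limit counters in (2) follow the usual fixed-window pattern (a per-key counter that resets when the window expires). An in-process sketch of that pattern, assuming nothing about Nenjo's actual keys or limits:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Fixed-window rate limiter: the in-process analogue of a per-key
/// counter in Redis that expires with the window.
struct RateLimiter {
    window: Duration,
    limit: u32,
    counters: HashMap<String, (Instant, u32)>, // key -> (window start, count)
}

impl RateLimiter {
    fn new(window: Duration, limit: u32) -> Self {
        RateLimiter { window, limit, counters: HashMap::new() }
    }

    /// Returns true if a request under `key` is allowed right now.
    fn allow(&mut self, key: &str, now: Instant) -> bool {
        let entry = self.counters.entry(key.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        entry.1 += 1;
        entry.1 <= self.limit
    }
}

fn main() {
    let mut rl = RateLimiter::new(Duration::from_secs(60), 2);
    let now = Instant::now();
    println!("{} {} {}", rl.allow("user", now), rl.allow("user", now), rl.allow("user", now));
    // third call in the same window exceeds the limit of 2
}
```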

NATS JetStream

Durable message queue for backend-to-worker communication that requires delivery guarantees. Used for events where at-least-once delivery matters, such as pipeline execution triggers and bootstrap change notifications.
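At-least-once delivery implies the worker may see the same message twice, so consumers must be idempotent. A std-only sketch of the standard dedup-by-message-ID approach (the `DedupConsumer` type is illustrative, not Nenjo's consumer):

```rust
use std::collections::HashSet;

/// Under at-least-once delivery, a redelivered message must be a
/// no-op. Tracking processed message IDs makes the handler idempotent.
struct DedupConsumer {
    seen: HashSet<String>,
    processed: Vec<String>,
}

impl DedupConsumer {
    fn new() -> Self {
        DedupConsumer { seen: HashSet::new(), processed: Vec::new() }
    }

    /// Handle a delivery; returns true only if it was new work.
    fn handle(&mut self, msg_id: &str, payload: &str) -> bool {
        if !self.seen.insert(msg_id.to_string()) {
            return false; // redelivery: already processed, just ack
        }
        self.processed.push(payload.to_string());
        true
    }
}

fn main() {
    let mut c = DedupConsumer::new();
    println!("{}", c.handle("m1", "ticket.execute t42")); // new work
    println!("{}", c.handle("m1", "ticket.execute t42")); // redelivery, skipped
}
```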

S3-compatible storage (Garage)

Object storage for project documents, skill manifests, and large artifacts. The backend writes objects and generates pre-signed URLs; the worker fetches documents during bootstrap sync.

Communication flow

Chat message

  1. User sends a message in the frontend chat UI.
  2. Frontend posts to the backend REST API.
  3. Backend publishes a chat.message event to Redis pub/sub.
  4. The WebSocket relay forwards the event to the connected worker.
  5. Worker builds an agent (resolving role, tools, MCP, memory), runs the LLM loop, and streams token events back over the WebSocket.
  6. Backend relays the stream to the frontend in real time.
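Steps 5 and 6 are a token stream relayed until the worker closes it. A sketch with in-process channels standing in for the WebSocket hops (names and the "worker" thread are illustrative):

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

/// Backend-side view of steps 5-6: consume the worker's token stream
/// until the sender closes it, assembling the final message. (In the
/// real flow each token is also forwarded to the frontend as it
/// arrives; that forwarding is elided here.)
fn relay_and_assemble(rx: Receiver<String>) -> String {
    rx.into_iter().collect()
}

fn main() {
    // Channel standing in for the worker -> backend WebSocket.
    let (tx, rx) = channel();

    // "Worker": runs the LLM loop and streams token events.
    let worker = thread::spawn(move || {
        for token in ["Hel", "lo", ", wor", "ld"] {
            tx.send(token.to_string()).unwrap();
        }
        // tx is dropped here, which ends the stream.
    });

    let message = relay_and_assemble(rx);
    worker.join().unwrap();
    println!("assembled: {message}");
}
```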

Ticket execution

  1. User triggers a ticket execution from the frontend (or a cron schedule fires).
  2. Backend creates an execution run record in PostgreSQL and publishes a ticket.execute event.
  3. Worker receives the event, resolves the pipeline DAG, and executes steps sequentially (agent, gate, council, lambda).
  4. Each step's status and output are reported back to the backend API, which persists them and streams progress to the frontend.
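Resolving the pipeline DAG into a sequential run order (step 3) is a topological sort: every step runs after its dependencies. A std-only sketch using Kahn's algorithm; the `Step` shape is illustrative, not Nenjo's pipeline model:

```rust
use std::collections::HashMap;

/// A pipeline step plus the IDs of the steps it depends on.
struct Step {
    id: &'static str,
    deps: Vec<&'static str>,
}

/// Resolve a sequential run order in which every step comes after
/// its dependencies (Kahn's algorithm). Returns None on a cycle.
fn run_order(steps: &[Step]) -> Option<Vec<&'static str>> {
    let mut indegree: HashMap<&str, usize> =
        steps.iter().map(|s| (s.id, s.deps.len())).collect();
    let mut order = Vec::new();
    while order.len() < steps.len() {
        // Pick a step whose dependencies have all completed;
        // if none exists, the remaining steps form a cycle.
        let next = steps.iter().find(|s| indegree.get(s.id) == Some(&0))?.id;
        indegree.remove(next);
        for s in steps {
            if s.deps.contains(&next) {
                *indegree.get_mut(s.id).unwrap() -= 1;
            }
        }
        order.push(next);
    }
    Some(order)
}

fn main() {
    let steps = vec![
        Step { id: "agent", deps: vec![] },
        Step { id: "gate", deps: vec!["agent"] },
        Step { id: "council", deps: vec!["gate"] },
        Step { id: "lambda", deps: vec!["council"] },
    ];
    println!("run order: {:?}", run_order(&steps).unwrap());
}
```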

Configuration changes

  1. User modifies a role, pipeline, agent, or other resource in the frontend.
  2. Backend persists the change to PostgreSQL and publishes a bootstrap.changed event.
  3. Worker re-fetches the full bootstrap snapshot from GET /api/v1/agents/bootstrap, atomically swaps its in-memory data, and persists the new cache to disk.

See Bootstrap and Hot-Swap for the full reconciliation process.
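The atomic swap in step 3 can be sketched with std primitives: readers hold a cheap `Arc` to the current snapshot, and a refresh builds a whole new snapshot and swaps the pointer in one step, so in-flight work keeps a consistent view. The `Snapshot` fields here are illustrative, not the real bootstrap layout:

```rust
use std::sync::{Arc, RwLock};

/// The worker's in-memory view of backend configuration.
/// Fields are illustrative only.
#[derive(Debug)]
struct Snapshot {
    version: u64,
    agents: Vec<String>,
}

/// Holder whose current snapshot can be swapped atomically.
struct HotSwap {
    current: RwLock<Arc<Snapshot>>,
}

impl HotSwap {
    fn new(initial: Snapshot) -> Self {
        HotSwap { current: RwLock::new(Arc::new(initial)) }
    }

    /// Cheap clone of the current snapshot pointer for a request.
    fn load(&self) -> Arc<Snapshot> {
        Arc::clone(&self.current.read().unwrap())
    }

    /// Replace the snapshot in one step; in-flight holders of the
    /// old Arc keep their consistent view until they drop it.
    fn swap(&self, next: Snapshot) {
        *self.current.write().unwrap() = Arc::new(next);
    }
}

fn main() {
    let state = HotSwap::new(Snapshot { version: 1, agents: vec!["a1".into()] });
    let in_flight = state.load(); // a running request holds the old view
    state.swap(Snapshot { version: 2, agents: vec!["a1".into(), "a2".into()] });
    println!("in-flight sees v{}, new work sees v{}", in_flight.version, state.load().version);
}
```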

Data flow

PostgreSQL ──(bootstrap API)──> Worker cache (~/.nenjo/data/*.json)

                                      ├── RouterContext (in-memory snapshot)
                                      │       ├── BootstrapSnapshot
                                      │       ├── ProviderRegistry
                                      │       ├── AgentMemory (SQLite)
                                      │       └── SecurityPolicy

                                      └── ExecutorResources (per pipeline run)
                                              ├── agents, roles, pipelines, councils
                                              ├── skills, modes, MCP servers
                                              └── provider registry + memory + security

The worker never queries PostgreSQL directly. All data flows through the backend API, either at bootstrap time or via the hot-swap mechanism. This keeps the worker effectively stateless -- its disk cache is just a rebuildable copy of backend state -- and allows it to run in a separate network zone from the database.
