
Platform & Infrastructure

Repository layout, plugin architecture, deployment model, hosting stack, routing, configuration, and database support.

Project Structure

The Clarity Payment Hub repository follows a modular, three-tier architecture. Each directory serves a distinct role, with Git submodules connecting independent codebases into a single deployable solution.

/               Repository root
/Core           Submodule: Phoenix.Core — foundation (pipelines, auth, caching, scheduler, notifications)
/Plugins        Submodule folder containing all plugin references
/Client         Application entry point, client-specific customizations, Startup.cs, Migrations
/RemixUI        Frontend solution (Remix v2, React, TypeScript)
/app/plugins    Symlinks to plugin frontend code
/Kubernetes     Hosting scripts and templates
Clarity.sln     Main backend solution file
/.vscode        VS Code helper scripts and settings
/.config        dotnet CLI tool configuration

Three-tier Architecture

Client
Entry point. Registers Core and all Plugins. Holds migrations and client-specific overrides.

Core
Foundation framework. Provides pipelines, IPlugin, auth, settings, CRUD, workflows, scheduling, and notifications.

Plugins
Add functionality. Each plugin is an independent project that implements IPlugin and registers its own services, models, and routes.

The dependency graph is strictly one-directional: Client depends on Core and Plugins; Plugins depend on Core; Core depends on nothing above it.

Plugin Registration

Plugins are registered in Client/Startup.cs using the fluent AddPhoenixPlugin<T>() extension method. Each call wires the plugin into the application builder, and the final .Build() call materializes the pipeline.

Startup.cs Registration Chain

// Client/Startup.cs (excerpt)
var app = builder
    .AddPhoenixPlugin<CMSPlugin>()
    .AddPhoenixPlugin<SitePlugin>()
    .AddPhoenixPlugin<CurrenciesPlugin>()
    .AddPhoenixPlugin<SalesCollectionsPlugin>()
    .AddPhoenixPlugin<PaymentsPlugin>()
    .AddPhoenixPlugin<MockPaymentsProviderPlugin>()
    // .AddPhoenixPlugin<NuveiPaymentsProviderPlugin>()
    .AddPhoenixPlugin<InvoicingPlugin>()
    .AddPhoenixPlugin<ProductsPlugin>()
    .AddPhoenixPlugin<ConnectCorePlugin>()
    .AddPhoenixPlugin<NetSuitePlugin>()
    .AddPhoenixPlugin<SysproPlugin>()
    .AddPhoenixPlugin<EpicorEaglePlugin>()
    .AddPhoenixPlugin<EpicorEclipsePlugin>()
    .AddPhoenixPlugin<InforSytelinePlugin>()
    .AddPhoenixPlugin<SapB1Plugin>()
    .AddPhoenixPlugin<ClientPlugin>()
    .Build();

IPlugin Interface Contract

// Core/Phoenix.Core/Plugins/IPlugin.cs
public interface IPlugin
{
    string Name { get; }                                  // Display name of the plugin
    string Schema { get; }                                // Dedicated EF Core database schema
    void RegisterServices(WebApplicationBuilder builder); // DI registration (pre-build)
    void OnStartup(WebApplication app);                   // Post-build initialization
    void OnModelCreating(ModelBuilder builder);           // EF Core entity mappings
    virtual Type[] GetAdditionalDatabaseTypes() => Array.Empty<Type>();
}

Plugin Loading Lifecycle

1. RegisterServices: DI registration. Each plugin adds its services, repositories, and handlers to the container.

2. OnModelCreating: EF Core model configuration. Each plugin defines its entity mappings and schema.

3. Build: the application is materialized. The service provider and middleware pipeline are finalized.

4. OnStartup: post-build initialization. Plugins run seed data, start background tasks, and configure middleware.

Plugins are loaded in the exact order declared in Startup.cs. The ClientPlugin is registered last to allow client-specific overrides of any previously registered services. Each plugin receives its own EF Core schema, keeping database objects cleanly separated.

Deployment Model

Deployment Units

Backend (.NET)
Client + Core + Plugins compiled into a single process. Runs as ASP.NET Core on Kestrel.

Frontend (Node/Remix)
RemixUI application. Serves the UI and proxies API requests to the backend container.

Database (SQL)
External database. Connection string is injected per environment via Kubernetes secrets.

Optional (Redis, Elasticsearch)
Redis runs as a sidecar container; Elasticsearch is available for search capabilities. Both are provisioned via docker-compose locally.
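
For local development, the compose file referenced above might look roughly like this sketch; the service names and image tags are illustrative, not the repository's actual docker-compose.yml.

```yaml
# Hypothetical docker-compose.yml sketch — images/tags are illustrative
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
```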

Deployment Architecture

Traffic enters through the Gateway via an HTTPRoute and is routed into the Kubernetes pod, which runs three containers: the backend (.NET, port 8080), the frontend (Node, port 3000), and a Redis sidecar. The pod connects to the external database and mounts NFS storage.

Kubernetes Deployment Excerpt

# Kubernetes/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{acronym}}-{{env}}
  namespace: {{acronym}}-{{env}}
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: redis
        image: "redis"
      - name: backend
        image: image-registry.hq.clarityinternal.com/backend:{{commit}}
        env:
        - name: "ConnectionStrings__Clarity"
          valueFrom:
            secretKeyRef:
              name: db
              key: connectionString
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Production"
      - name: frontend
        image: image-registry.hq.clarityinternal.com/frontend:{{commit}}
        ports:
        - containerPort: 3000
        env:
        - name: "API_URL"
          value: "http://localhost:8080"

One namespace per project/environment (e.g. {{acronym}}-{{env}}). Backend and frontend containers share a pod. Redis runs as a sidecar. NFS PVC is mounted for file storage.

Multi-Tenant Isolation

Each client deployment uses a dedicated database instance, providing complete data isolation at the database level. There is no shared database or row-level tenancy — each tenant's data is physically separated. Connection strings, credentials, and configuration are managed per-tenant. The Client project layer holds all customer-specific overrides, pipeline hooks, and configuration, ensuring that core and plugin updates can be applied independently without affecting client customizations.

Build Pipeline

The CI/CD pipeline (Azure DevOps or similar) builds, containerizes, and packages the application through three sequential stages.

Pipeline Stages

BuildBackend: Docker build + push (backend.dockerfile)
BuildFrontend: Docker build + push (frontend.dockerfile)
CreateDeploymentArtifact: token replacement (deployment.yml)

BuildBackend

Builds the .NET solution using Kubernetes/backend.dockerfile. Pushes the image to the internal registry tagged with the commit hash and latest.

BuildFrontend

Builds the Remix application using Kubernetes/frontend.dockerfile. Same tagging strategy as backend.

CreateDeploymentArtifact

Performs token replacement on deployment.yml — substituting {{acronym}}, {{env}}, {{commit}}, {{subdomain}}, and {{domain}} — then publishes the artifact for deployment.
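
The token-replacement step can be approximated with plain sed. A minimal sketch, assuming simple global string substitution; the values acr and prd are illustrative placeholders, not real project tokens:

```shell
# Substitute {{acronym}} and {{env}} tokens the way the
# CreateDeploymentArtifact stage does (values are illustrative)
template='namespace: {{acronym}}-{{env}}'
rendered=$(printf '%s' "$template" \
  | sed -e 's/{{acronym}}/acr/g' -e 's/{{env}}/prd/g')
echo "$rendered"   # namespace: acr-prd
```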

Docker Build & Push

# Build and push backend image
docker build -t image-registry.hq.clarityinternal.com/backend:$(Build.SourceVersion) \
             -f Kubernetes/backend.dockerfile .
docker push image-registry.hq.clarityinternal.com/backend:$(Build.SourceVersion)

# Build and push frontend image
docker build -t image-registry.hq.clarityinternal.com/frontend:$(Build.SourceVersion) \
             -f Kubernetes/frontend.dockerfile .
docker push image-registry.hq.clarityinternal.com/frontend:$(Build.SourceVersion)

# Both images are also tagged with 'latest'

Hosting Stack

The platform is deployed on Kubernetes and is cloud-agnostic. No cloud-specific SDK or provider is hardcoded into the codebase.

Orchestration
Kubernetes (Deployment, Service, HTTPRoute, Namespace, PVC). Each project/environment gets its own namespace.

Container Registry
image-registry.hq.clarityinternal.com — images tagged by commit hash and latest.

Storage
NFS with storageClassName: nfs-client, ReadWriteMany access mode, mounted at /usr/share/phx for backend file operations.

Database
External database. Connection string injected from the Kubernetes secret db.connectionString. Provider is determined by the connection string format.

Gateway
Kubernetes gateway.networking.k8s.io/v1 HTTPRoute with hostname template {{subdomain}}.{{domain}}.com.

Routing

The Kubernetes HTTPRoute defines how incoming traffic is distributed between the backend and frontend services. URL rewriting is applied to API routes so the backend receives clean paths.

Path Match                          Backend Service                Filters
/auth/microsoft/register/callback   nest-prd-backend               None
/auth/microsoft-callback            {{acronym}}-{{env}}-backend    None
/api/*                              {{acronym}}-{{env}}-backend    URL rewrite (replacePrefixMatch: /)
/ (default)                         {{acronym}}-{{env}}-frontend   None

The /api prefix is stripped before forwarding to the backend service, so backend controllers do not need to include /api in their route templates. Auth callback routes are handled directly by the backend without rewriting.
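
As a sketch, the /api rule above corresponds to a Gateway API HTTPRoute rule fragment like the following; the backend name acr-prd-backend stands in for the templated {{acronym}}-{{env}}-backend, and the port is assumed from the deployment excerpt:

```yaml
# Hypothetical HTTPRoute rule fragment for the /api/* row
- matches:
  - path:
      type: PathPrefix
      value: /api
  filters:
  - type: URLRewrite
    urlRewrite:
      path:
        type: ReplacePrefixMatch
        replacePrefixMatch: /
  backendRefs:
  - name: acr-prd-backend   # {{acronym}}-{{env}}-backend after token replacement
    port: 8080
```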

Configuration & Secrets

Configuration follows ASP.NET Core conventions with environment-specific overrides. Secrets are injected at runtime via Kubernetes, keeping sensitive values out of the codebase.

Kubernetes Environment Variables

Variable                     Source                             Purpose
ConnectionStrings__Clarity   K8s Secret (db.connectionString)   Database connection string
AuthSettings__JwtKey         K8s Secret / ConfigMap             JWT signing key
AuthSettings__JwtIssuer      K8s Secret / ConfigMap             JWT token issuer URL
ASPNETCORE_ENVIRONMENT       Deployment manifest                Runtime environment (Production, Staging)
API_URL                      Deployment manifest                Backend URL for frontend proxy (http://localhost:8080)
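
The db secret referenced by ConnectionStrings__Clarity could be declared as follows. This is a sketch only; the namespace and connection string values are purely illustrative:

```yaml
# Hypothetical Secret matching the secretKeyRef in deployment.yml
apiVersion: v1
kind: Secret
metadata:
  name: db
  namespace: acr-prd   # illustrative; real namespaces follow {{acronym}}-{{env}}
type: Opaque
stringData:
  connectionString: "Host=db.internal;Database=Clarity;Username=clarity;Password=..."
```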

Configuration Hierarchy

ASP.NET Core merges configuration sources in order, with later sources overriding earlier ones:

// Configuration loading order (later overrides earlier)
1. appsettings.json                  // Base configuration
2. appsettings.{Environment}.json    // Environment-specific overrides
3. Environment variables             // Kubernetes-injected values, including secrets
                                     // mounted via secretKeyRef (connection strings, JWT keys)

The double-underscore convention (ConnectionStrings__Clarity) maps to the nested JSON path ConnectionStrings:Clarity in appsettings.
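
The mapping is mechanical: each double underscore in the variable name becomes the : hierarchy separator in the configuration path, as this small sketch illustrates.

```shell
# ASP.NET Core treats "__" in env var names as the ":" config separator
var='ConnectionStrings__Clarity'
path=$(printf '%s' "$var" | sed 's/__/:/g')
echo "$path"   # ConnectionStrings:Clarity
```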

Local Development

Local development uses Docker Compose for infrastructure services and standard .NET / Node tooling for the application code.

1. Start Infrastructure Services

Spin up Redis and Elasticsearch using Docker Compose.

docker-compose up -d

2. Configure Backend

Copy the example settings file and configure your local connection string.

cp appsettings.Development.example.json appsettings.Development.json
# Edit appsettings.Development.json — set ConnectionStrings:Clarity

3. Set Connection String

Add your local database connection string to the settings file. Both PostgreSQL and MSSQL formats are supported.

// appsettings.Development.json
{
  "ConnectionStrings": {
    "Clarity": "Server=localhost;Database=Clarity;User Id=sa;Password=..."
  }
}

4. Run Backend

Start the .NET backend from the Client project.

dotnet run --project Client

5. Run Frontend

Start the Remix development server from the RemixUI directory.

cd RemixUI
npm run dev

Resource Limits

Kubernetes resource requests and limits are defined per container in the deployment manifest. These values govern scheduling and OOM kill thresholds.

Container   Memory Limit   CPU Limit   Notes
Backend     1.5Gi          500m        .NET runtime, EF Core, plugin services
Frontend    0.75Gi         500m        Node.js Remix SSR
Redis       128Mi          100m        In-memory cache sidecar

Total pod footprint is approximately 2.375Gi memory and 1100m CPU. The deployment uses a single replica by default.
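
The stated footprint checks out arithmetically when everything is converted to Mi and millicores:

```shell
# 1.5Gi = 1536Mi, 0.75Gi = 768Mi; 2432Mi / 1024 = 2.375Gi
mem_mi=$(( 1536 + 768 + 128 ))
cpu_m=$(( 500 + 500 + 100 ))
echo "${mem_mi}Mi memory, ${cpu_m}m CPU"   # 2432Mi memory, 1100m CPU
```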

PostgreSQL & MSSQL Support

The Clarity Platform supports both PostgreSQL and Microsoft SQL Server through Entity Framework Core's provider-agnostic abstraction. The active database provider is determined at runtime based on the connection string format.

EF Core Provider-Agnostic Architecture

PostgreSQL
Open-source, preferred for new deployments.

Host=localhost;Port=5432;Database=Clarity;Username=postgres;Password=***

Microsoft SQL Server
Supported for existing enterprise installations.

Server=localhost;Database=Clarity;User Id=sa;Password=***;TrustServerCertificate=True

Runtime Provider Detection

The platform inspects the connection string format at startup and configures the appropriate EF Core provider. No code changes are required to switch databases.

// Simplified provider selection logic
if (connectionString.Contains("Host="))
{
    // PostgreSQL detected
    options.UseNpgsql(connectionString);
}
else
{
    // SQL Server (default)
    options.UseSqlServer(connectionString);
}

Migration Considerations

Schema Management

Each plugin owns its own database schema (e.g., payments, invoicing). Migrations are generated from the Client project which references all plugins.

Provider-Specific SQL

EF Core handles most differences automatically. Provider-specific SQL (e.g., JSON column types, full-text search) should use conditional compilation or provider checks.

Testing Across Providers

Both database providers are fully supported in all environments. Integration tests should be run against both providers to ensure compatibility.

Frequently Asked Questions