
Testing, Reliability & Releases

Test infrastructure, test patterns, build pipelines, scalability, release workflows, branching strategy, and failure mode handling across the Clarity platform.

Test Infrastructure

All integration tests in Clarity derive from the TestBase foundation class provided by Core.Testing. This class provisions an in-memory database and configures the full pipeline context, giving each test complete isolation from external services and other tests.

TestBase Foundation Class


Configure(builder)

Override to register additional services, mock dependencies, or customize the DI container for a specific test class.


GetPipelineContext()

Returns a fully initialized pipeline context with all registered services, ready for executing pipelines in the test environment.


GetDisposableDatabaseContext()

Creates an isolated, in-memory database context that is automatically disposed after the test completes. Ensures zero cross-test contamination.

Creating a Test Project

1

Create C# Project

Add a new C# test project inside the plugin directory. Use the NUnit or xUnit project template.

2

Reference Core.Testing

Add a project reference to Core.Testing to gain access to TestBase and all test utilities.

3

Derive from TestBase

Create test classes that inherit from TestBase. Override Configure() if needed.

4

Write [Test] Methods

Write test methods annotated with [Test]. Use the pipeline context and disposable database context.

Example: Ecommerce Integration Test

public class EcommerceTests : TestBase
{
    [Test]
    public async Task Pricing_WithValidData_ReturnsValidPrice()
    {
        // Arrange
        var context = GetPipelineContext();
        await using var db = GetDisposableDatabaseContext();

        db.Set<ProductPrice>().Add(new ProductPrice
        {
            /* test data */
        });

        // Act (someInput is a placeholder for the pipeline's input model)
        var price = await CalculatePricesPipeline.ExecuteAsync(someInput, context);

        // Assert
        Assert.Multiple(() =>
        {
            Assert.That(price, Is.Not.Empty);
        });
    }
}

In-memory Database Isolation

Each call to GetDisposableDatabaseContext() provisions a fresh in-memory database. Tests run in parallel without conflicts, and all data is automatically cleaned up when the context is disposed.
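Where a test needs a dependency mocked, the Configure override is the hook for it. A minimal sketch, assuming Configure receives an IServiceCollection-style builder; ITaxRateProvider and FakeTaxRateProvider are illustrative names, not platform types:

```csharp
public class PricingTests : TestBase
{
    // Sketch: swap a real service for a test double before the
    // pipeline context is built. Service names are hypothetical.
    protected override void Configure(IServiceCollection builder)
    {
        base.Configure(builder);
        builder.RemoveAll<ITaxRateProvider>();                         // drop the real registration
        builder.AddSingleton<ITaxRateProvider, FakeTaxRateProvider>(); // register the fake
    }
}
```

RemoveAll followed by AddSingleton is the standard Microsoft.Extensions.DependencyInjection pattern for replacing a registration without caring how the original was added.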


Test Coverage Strategy

The testing strategy prioritizes pipeline-level integration tests over isolated unit tests. This reflects the platform's architecture: pipelines orchestrate complex multi-step workflows, and testing them end-to-end provides higher confidence than testing individual components in isolation.

Core Payment Workflows: Authorize, capture, refund, void, partial payment, and declined transaction scenarios are tested against the Mock provider, validating the full payment state machine.

Pipeline Engine: Serial execution order, hook chaining, pre-hook short-circuit behavior, and error propagation via ExecuteSafelyAsync.

Connector Integration: Round-trip sync operations validated against sandbox ERP instances, verifying data integrity through the EkDB mapping layer.

Test Patterns

All tests in the Clarity platform follow the Arrange / Act / Assert pattern and use a strict naming convention to ensure readability and traceability across the test suite.

Arrange / Act / Assert


Arrange

Set up test data, initialize the pipeline context, create disposable database contexts, and seed any required entities.


Act

Execute the pipeline, service method, or operation under test. Capture the result or any thrown exceptions.


Assert

Verify the result matches expectations using NUnit assertions. Use Assert.Multiple for grouped checks.

Test Naming Convention

// Pattern:
{CodeUnderTest}_{Scenario}_{ExpectedOutcome}

Naming Examples

Test Method Name | Code Under Test | Scenario | Expected Outcome
PricingPipeline_WithValidData_ReturnsExpectedPrice | PricingPipeline | Valid input data | Returns expected price
InventoryPipeline_WithNoWarehouse_ThrowsException | InventoryPipeline | No warehouse configured | Throws exception

Core Tests

The Core framework includes a comprehensive test suite that validates the foundational subsystems. These tests run on every build and serve as the quality gate for the pipeline engine, data model, expression builder, and contract generation.


SerialPipelineTests

Validates serial pipeline execution order, hook chaining behavior, and default hook registration. Ensures that hooks fire in the correct sequence and that pipelines propagate results correctly.


ParallelPipelineTests

Validates parallel pipeline execution, concurrent hook invocation, and result collection from multiple branches. Confirms that parallel results are aggregated correctly.


DataModelTests

Validates entity configuration, model builder mappings, schema separation, and relationship configuration. Ensures all data models are correctly mapped for both PostgreSQL and MSSQL.


ExpressoTests

Validates the Expresso expression builder, testing dynamic expression construction, predicate building, and filter composition for query operations.


ContractTests

Validates API contract generation, ensuring that endpoint contracts are correctly produced, serialized, and consistent across builds.

Plugin Tests

Each plugin can maintain its own test project, organized alongside the plugin source code. Plugin tests use the same TestBase foundation and follow the same Arrange/Act/Assert patterns as core tests.

Common Plugin Test Types


ModuleTokenTests

Validates that module tokens are correctly generated, resolved, and scoped to the appropriate plugin namespace.


ModulesIntegrationTests

Full integration tests that exercise plugin pipelines end-to-end with the in-memory database and pipeline context.


ModulesApiTests

Validates API endpoint behavior, request/response serialization, and contract adherence for plugin-exposed endpoints.

Building Plugin Tests

# Build with SkipRootOverlay to avoid conflicts with the main solution
dotnet build /p:SkipRootOverlay=true

The /p:SkipRootOverlay=true flag prevents the build from pulling in root-level overlay files, allowing plugin tests to be built in isolation without requiring the full solution to be compiled.
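Assuming the test project sits beside the plugin source, building and running it in isolation might look like this (the project path is illustrative):

```shell
# Build only the plugin test project, bypassing root-level overlay files
dotnet build MyPlugin.Tests/MyPlugin.Tests.csproj /p:SkipRootOverlay=true

# Run the tests with the same flag so the test build matches
dotnet test MyPlugin.Tests/MyPlugin.Tests.csproj /p:SkipRootOverlay=true
```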


Test Workflow

Plugin tests follow the same pattern: derive from TestBase, obtain a pipeline context, execute a pipeline or service method, then assert on the results. The TestBase class handles all bootstrapping, DI registration, and database provisioning automatically.

Build Process

The Clarity build pipeline compiles both backend and frontend, packages them into Docker images, and produces deployment artifacts. The pipeline is orchestrated by Azure DevOps and runs on every commit to tracked branches.

Backend Build

dotnet build /p:SkipRootOverlay=true

Build Pipeline Stages


BuildBackend

dotnet build + test


BuildFrontend

npm ci + build


CreateArtifact

Docker + deploy.yml


Docker Image Tagging

Every successful build produces Docker images tagged with both the commit hash (for traceability) and latest (for convenience). This allows pinning to a specific build or always pulling the most recent.


CI Integration

The build pipeline runs on Azure DevOps. It triggers automatically on push to tracked branches, runs all tests, builds Docker images, and publishes deployment artifacts.

Scalability

The Clarity platform is designed for horizontal scaling via Kubernetes. Both the backend and frontend are stateless, allowing replicas to be added or removed without coordination. Shared state is externalized to Redis and the database.

Scaling Components


Backend (Stateless HTTP API)

Each backend pod is a stateless ASP.NET Core process. Scale horizontally by increasing the replica count in the Kubernetes deployment. No session affinity required.


Frontend (Stateless Remix Server)

The Remix SSR server is stateless. Additional frontend pods handle more concurrent page renders. Static assets can be served from a CDN for further offloading.


Redis (Shared Cache)

Shared cache for multi-replica backends. Runs as a single instance sidecar by default; for production multi-replica deployments, consider an external Redis cluster for high availability.


Database (External)

Database scaling is provider-dependent. Azure SQL supports elastic pools for workload-based scaling. PostgreSQL supports read replicas for read-heavy workloads.
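Because both tiers are stateless, scaling out is just a replica-count change. A sketch, assuming deployments named backend and frontend in a clarity namespace (names and namespace are illustrative):

```shell
# Add backend capacity by raising the replica count
kubectl scale deployment/backend --replicas=4 -n clarity

# The frontend scales the same way, independently
kubectl scale deployment/frontend --replicas=3 -n clarity
```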

Scaling Architecture

The Load Balancer (Ingress) routes traffic to N backend pods (Pod 1 ... Pod N) and N frontend pods (Pod 1 ... Pod N); all replicas share a single Redis cache and an external database (SQL / PostgreSQL).

Resource Allocation

Kubernetes resource limits are configured per container in the deployment manifest. These limits define the maximum CPU and memory each container can consume, preventing any single container from starving others on the same node.

Container | Memory Limit | CPU Limit | Notes
Backend | 1.5Gi | 500m | Stateless, can scale horizontally
Frontend | 0.75Gi | 500m | SSR rendering, can scale horizontally
Redis | 128Mi | 100m | In-pod sidecar; consider external for multi-replica

Multi-Replica Redis Consideration

When scaling to multiple backend replicas, the in-pod Redis sidecar becomes per-pod rather than shared. For consistent caching across replicas, deploy an external Redis instance or cluster and update the connection string in the deployment secrets.

Release Process

Releases follow a commit-based image tagging strategy. Every merged commit produces a tagged Docker image that can be deployed via the Kubernetes deployment manifest. The deployment strategy is Recreate, replacing all pods at once.

Image Tagging

# Docker images are tagged with the commit hash
image-registry.../backend:{{commit}}
image-registry.../frontend:{{commit}}

# And also tagged as "latest" for convenience
image-registry.../backend:latest
image-registry.../frontend:latest

Deployment Manifest

# deployment.yml (excerpt)
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: Recreate    # All pods replaced at once
  template:
    spec:
      containers:
        - name: backend
          image: image-registry.../backend:{{commit}}

Release Pipeline

Commit → PR + Review → Merge (develop) → Build (pipeline) → Docker Push (hash + latest) → Artifact (deploy.yml) → Deploy (to VM)


Bug Fixes

Bug fixes follow the same pipeline. Hotfix branches are created from the affected version, fixes are applied, and the corrected image flows through the identical build and deploy pipeline.


ERP Version Upgrades

ERP version upgrades are handled per-connector. Each connector plugin may need code changes or dependency updates to support a new ERP API version. These changes flow through the standard release pipeline.

Branching Strategy

The Clarity platform uses a multi-tier branching strategy coordinated across the main repository and its submodules. Branches follow a consistent naming convention and flow through a defined promotion path from feature development to production.

Branch Naming Conventions

Scope | Pattern | Example | Purpose
Project | projects/{xyz}/qa | projects/acme/qa | QA integration branch for a project
Project | projects/{xyz}/hotfix/... | projects/acme/hotfix/fix-pricing | Urgent fix for a specific project
Project | projects/{xyz}/feature/... | projects/acme/feature/new-checkout | Feature work scoped to a project
Plugin / Core | feature/... | feature/add-surcharge-pipeline | New feature development
Plugin / Core | hotfix/... | hotfix/payment-timeout | Critical fix for a plugin or core

Promotion Flow

feature/* (development) → develop (integration) → QA (testing) → main (production)


Submodule Branch Coordination

When working on features that span Core and Plugins, the submodule references must be updated to point to the correct branch in each submodule. Submodule pointers are committed as part of the parent repository.
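Updating a submodule pointer follows standard git mechanics; a typical sequence for a feature spanning Core and the parent repository (the branch name is illustrative):

```shell
# Inside the submodule, check out the branch the feature lives on
cd Core
git checkout feature/add-surcharge-pipeline
cd ..

# In the parent repository, stage the new submodule pointer and commit it
git add Core
git commit -m "Point Core submodule at feature/add-surcharge-pipeline"
```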


PR Workflow

Pull requests are created within each repository (Core, Plugin, or Client). Cross-repo PRs are linked via references in the PR description. All PRs require review before merging to develop.

Failure Modes

The payment system defines comprehensive status models for tracking payment outcomes through their lifecycle. These statuses enable the platform to handle partial failures, retries, and error recovery at both the aggregate and individual source level.

Payment Status Aggregation

PENDING: Payment initiated, awaiting processing
COMPLETED: All sources captured successfully
PARTIALLY_COMPLETE: Some sources captured, others pending or failed
ERROR: Processing error occurred

Payment Method Source Status Lifecycle

Status | Description
Internal Pending | Created, not yet sent to provider
Pre-processing | Validation in progress
External Pending | Sent to provider, awaiting response
Authorized | Provider approved, funds held
Partial Authorized | Partially approved
Captured | Funds transferred
Refunded | Payment reversed
Voided | Authorization cancelled
Declined | Provider rejected
Internal Error | Platform error
External Error | Provider error
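The mapping from individual source statuses to the aggregate status can be sketched as a simple fold over the sources; the enum and member names below are illustrative, not the platform's actual types:

```csharp
// Illustrative aggregation: derive the aggregate payment status
// from the individual payment method source statuses.
public static PaymentStatus Aggregate(IReadOnlyCollection<SourceStatus> sources)
{
    if (sources.Any(s => s is SourceStatus.InternalError or SourceStatus.ExternalError))
        return PaymentStatus.Error;             // any errored source fails the payment
    if (sources.All(s => s == SourceStatus.Captured))
        return PaymentStatus.Completed;         // all sources captured successfully
    if (sources.Any(s => s == SourceStatus.Captured))
        return PaymentStatus.PartiallyComplete; // some captured, others pending or failed
    return PaymentStatus.Pending;
}
```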

Surcharge Status Lifecycle

Pending, Completed, PartiallyCompleted, Abandoned, Assigned, Transient, Refunded, Declined, Cancelled, Authorized

Failure Handling Strategies


Decline Handling

When a payment is declined, the UpdateSurchargeForDeclinedPaymentPipeline is triggered to update surcharge records and notify the appropriate systems.


Partial Payments

When only some payment sources succeed, the aggregate status transitions to PartiallyComplete, allowing the system to track which sources still need resolution.


Refund Processing

Refunds are processed via the RefundPaymentPipeline, which reverses the captured funds and updates all related status records.


Connector / ERP Errors

Connector and ERP errors are logged with full context and counted for monitoring. Transient errors may be retried automatically depending on the connector configuration.

Payment Status State Machine

Pending → Authorized → Captured → Completed (happy path)
Pending → Declined
Authorized → Voided
Captured → Refunded


Frequently Asked Questions