Pipelines & Hooks
The core extensibility model in Phoenix
Pipeline Overview
Core Concept
Pipelines are the single most important architectural concept in Phoenix. They are the primary unit of work, the primary extension point, and the mechanism through which virtually all application logic flows. Understanding pipelines is a prerequisite to understanding everything else.
Pipelines provide a clean, composable model for extending and replacing behaviors without conflicts.
Every pipeline defines strict input and output types. A pipeline takes a well-defined input, processes it through a default implementation and any registered hooks, and produces a well-defined output. This type safety ensures that all participants in a pipeline chain are working with compatible data.
Almost all controllers and API endpoints in Phoenix delegate their work to a specific pipeline. When a request arrives, the controller extracts the relevant input, calls the appropriate pipeline, and returns the result. This means that to change the behavior of any endpoint, you simply hook into its pipeline rather than modifying controller code.
Pipelines can call other pipelines, and this pattern is actively encouraged. By composing pipelines, you build extensible chains of behavior where each step can be independently hooked, replaced, or augmented by any plugin in the system.
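For illustration, here is a sketch of composition (the pipeline and model names are hypothetical; the SerialPipeline base class is covered below):

```csharp
// Hypothetical composition: an order pipeline delegating tax calculation
// to another pipeline, so any plugin hooks on the tax pipeline apply
// wherever tax is computed.
public class BuildOrderPipeline : SerialPipeline<BuildOrderPipeline, OrderInput, Order>
{
    public override async ValueTask<Order> ExecuteDefaultAsync(
        OrderInput input, IPipelineContext context, CancellationToken token = default)
    {
        var tax = await CalculateTaxPipeline.ExecuteAsync(input, context);
        return new Order { Items = input.Items, Tax = tax };
    }
}
```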
Pipeline Execution Flow
Each hook receives the original input and the previous hook's result, producing a new result passed to the next hook.
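Conceptually, a serial pipeline is a fold over its hook chain. The following sketch illustrates those semantics only; it is not the framework's actual implementation:

```csharp
// Illustrative semantics of serial execution (not Phoenix internals).
static async ValueTask<TOutput> RunSerial<TInput, TOutput>(
    TInput input,
    Func<TInput, ValueTask<TOutput>> defaultLogic,
    IEnumerable<Func<TInput, TOutput, ValueTask<TOutput>>> hooks)
{
    // The default logic runs first.
    var result = await defaultLogic(input);
    // Each hook then receives the original input and the previous result.
    foreach (var hook in hooks)
        result = await hook(input, result);
    return result;
}
```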
Serial Pipelines
Serial pipelines are the default and most common pipeline type. In a serial pipeline, the default logic runs first, and then each registered hook runs one after another in sequence. Each hook receives the original input along with the result from the previous step, allowing it to transform, augment, or completely replace the result before passing it along.
To create a serial pipeline, your class derives from SerialPipeline<TSelf, TInput, TOutput>. The three type parameters are:
| Parameter | Description |
|---|---|
| TSelf | The pipeline class itself (enables the static generic pattern) |
| TInput | The type of data the pipeline accepts as input |
| TOutput | The type of data the pipeline returns as its result |
Creating a Serial Pipeline
Override ExecuteDefaultAsync to provide the pipeline's default behavior. This is the logic that runs when no default hook has replaced it.
public class MyCustomPipeline : SerialPipeline<MyCustomPipeline, int, SomeModel>
{
public override ValueTask<SomeModel> ExecuteDefaultAsync(
int input, IPipelineContext context, CancellationToken token = default)
{
// Your default logic goes here.
// This runs first, before any hooks.
var result = new SomeModel { Id = input, Name = "Default" };
return new ValueTask<SomeModel>(result);
}
}
Executing a Pipeline
Pipelines expose static methods for execution. You do not need to instantiate the pipeline class yourself — the framework handles resolution and hook ordering.
// Standard execution — throws on failure
var result = await MyCustomPipeline.ExecuteAsync(1, context);
// Safe execution — returns success flag instead of throwing
var (isSuccess, result) = await MyCustomPipeline.ExecuteSafelyAsync(1, context);
When to use ExecuteSafelyAsync
Use ExecuteSafelyAsync when a pipeline failure is an expected scenario and you want to handle it gracefully without propagating an exception. The returned tuple gives you an IsSuccess boolean and the Result value, making conditional handling straightforward.
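For example, using the tuple shown above:

```csharp
var (isSuccess, result) = await MyCustomPipeline.ExecuteSafelyAsync(1, context);
if (!isSuccess)
{
    // Expected failure: substitute a fallback instead of letting an exception propagate.
    result = new SomeModel { Id = 1, Name = "Fallback" };
}
```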
Hooks (Declarative)
Hooks are the most common way to customize pipeline behavior. A hook is a class that runs after the pipeline's default logic (or after the previous hook in the chain). Each hook receives two key pieces of data:
- The original input that was passed to the pipeline.
- The previous result — either the output from the default logic or the output from the hook that ran before this one.
This design means hooks form a chain. Each hook can inspect the previous result, modify it, enrich it, or replace it entirely before passing it along. Multiple plugins can each register their own hook on the same pipeline, and they will all execute in order without conflicting.
public class MyCustomHook : MyCustomPipeline.Hook
{
public override ValueTask<SomeModel> ExecuteAsync(
int input,
SomeModel previousResult,
IPipelineContext context,
CancellationToken token = default)
{
// Modify or replace the previous result
previousResult.Name = "Modified by MyCustomHook";
return new ValueTask<SomeModel>(previousResult);
}
}
The declarative approach (deriving from Pipeline.Hook) is the recommended pattern. Because hooks are standalone classes, they are easy to find by searching the codebase, easy to unit test in isolation, and easy to understand when reading the code.
Hook ordering follows plugin registration order. If Plugin A registers a hook before Plugin B, Plugin A's hook will run first and Plugin B's hook will receive Plugin A's result as its previousResult.
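To illustrate the ordering (with two hypothetical plugins hooking the MyCustomPipeline defined above):

```csharp
// Registered first by Plugin A: sees the default logic's result.
public class PluginAHook : MyCustomPipeline.Hook
{
    public override ValueTask<SomeModel> ExecuteAsync(
        int input, SomeModel previousResult,
        IPipelineContext context, CancellationToken token = default)
    {
        previousResult.Name += " +A"; // "Default" becomes "Default +A"
        return new ValueTask<SomeModel>(previousResult);
    }
}

// A hook registered later by Plugin B would receive "Default +A"
// as previousResult.Name and could extend it to "Default +A +B".
```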
Hook Chaining
Default Logic → Hook → Hook → Output
Default Hooks
A Default Hook completely replaces the pipeline's default implementation. Instead of augmenting or modifying the result after the default logic runs, a Default Hook becomes the new default logic. The original ExecuteDefaultAsync is bypassed entirely.
This is a powerful mechanism for scenarios where the built-in behavior is not suitable and you need to provide an entirely different implementation. For example, if the core platform calculates tax using a simple flat-rate model, a tax plugin could register a Default Hook to replace that with integration to a third-party tax service.
public class MyReplacementLogic : MyCustomPipeline.DefaultHook
{
public override ValueTask<SomeModel> ExecuteAsync(
int input,
IPipelineContext context,
CancellationToken token = default)
{
// This REPLACES the original ExecuteDefaultAsync entirely.
// The pipeline's built-in logic will NOT run.
var result = new SomeModel
{
Id = input,
Name = "Completely replaced by plugin"
};
return new ValueTask<SomeModel>(result);
}
}
When to use Default Hooks vs Regular Hooks
- Regular Hook: You want to modify, enrich, or extend the result after the default logic runs. The default logic is still valuable.
- Default Hook: You want to completely replace the default logic with your own implementation. The built-in behavior is not needed at all.
Note: Only one Default Hook can be active per pipeline. If multiple plugins register Default Hooks on the same pipeline, the last one registered wins.
Hooks (Imperative)
Instead of creating a standalone hook class, you can register hooks imperatively inside your plugin's OnStartup method. The framework provides two methods for this:
| Method | Purpose |
|---|---|
| AppendHook() | Adds a hook to the end of the hook chain (runs after all other hooks) |
| ReplaceDefaultHook() | Replaces the default implementation (equivalent to a DefaultHook class) |
public class MyPlugin : PhoenixPlugin
{
public override void OnStartup(IPluginContext context)
{
// Append a hook using a lambda
MyCustomPipeline.AppendHook(
async (input, previousResult, ctx, token) =>
{
previousResult.Name += " (enhanced by plugin)";
return previousResult;
});
// Replace the default implementation
MyCustomPipeline.ReplaceDefaultHook(
async (input, ctx, token) =>
{
return new SomeModel
{
Id = input,
Name = "Fully replaced default"
};
});
}
}
Class-Based Hooks Are Preferred
While imperative hooks work perfectly well, class-based (declarative) hooks are preferred in most scenarios. The reasons are practical:
- Class-based hooks are easier to search for in the codebase (search for class name or "Pipeline.Hook")
- They are easier to unit test because they have a well-defined class contract
- They are self-documenting — the class name describes what the hook does
Use imperative hooks for quick prototyping or truly trivial one-liner modifications.
Pre Hooks
Pre Hooks run before the pipeline's default logic executes. They are a gate that can inspect the input and decide one of three things:
- Halt — Stop the pipeline entirely and return a custom result. The default logic and all post-hooks are skipped.
- Proceed — Allow the pipeline to continue as normal with the original, unchanged input.
- ProceedWithInput — Allow the pipeline to continue, but with a modified input value.
Pre Hooks are ideal for validation, authorization checks, input sanitization, or short-circuiting when a cached or pre-computed result is available.
public class ValidateInputPreHook : MyCustomPipeline.PreHook
{
public override ValueTask<PreHookResult<int, SomeModel>> ExecuteAsync(
int input,
IPipelineContext context,
CancellationToken token = default)
{
// Option 1: Halt — stop pipeline, return custom result
if (input < 0)
{
return Halt(new SomeModel { Id = -1, Name = "Invalid input" });
}
// Option 2: ProceedWithInput — continue with modified input
if (input == 0)
{
return ProceedWithInput(1); // default to 1 instead of 0
}
// Option 3: Proceed — continue as normal, no changes
return Proceed();
}
}
Pre-Hook Decision Tree
- Halt: the pipeline is skipped entirely and the custom result is returned
- Proceed: the pipeline executes with no changes to the input
- ProceedWithInput: the pipeline executes with the modified input
Input Alterations
Input Alterations are a specialized type of Pre Hook that focus specifically on transforming the pipeline's input before it reaches the default logic. Unlike a full Pre Hook, an Input Alteration cannot halt the pipeline — it can only modify the input and let the pipeline continue.
This makes Input Alterations ideal for scenarios like pre-processing, data enrichment, or performing database lookups to supplement the input data before the main pipeline logic runs.
public class EnrichOrderInput : DoSomethingPipeline.InputAlteration
{
public override ValueTask<OrderInput> ExecuteAsync(
OrderInput input,
IPipelineContext context,
CancellationToken token = default)
{
// Perform a lookup or add supplemental data
// before the pipeline's default logic runs
input.TaxRate = GetCurrentTaxRate(input.Region);
input.DiscountCode = NormalizeDiscountCode(input.DiscountCode);
return new ValueTask<OrderInput>(input);
}
}
Input Alteration vs Pre Hook
An Input Alteration is simpler than a Pre Hook because it can only modify the input. If you need the ability to halt the pipeline or return a custom result, use a Pre Hook instead. Input Alterations are best when you always want the pipeline to run — you just want to ensure the input is complete and correct first.
Parallel Pipelines
In a Parallel Pipeline, all hooks run simultaneously rather than sequentially. Each hook operates independently, and the pipeline returns a collection of all results rather than a single chained result.
The classic use case is shipping rate calculation: when a customer views shipping options, you need to query UPS, FedEx, USPS, and possibly other carriers. There is no reason to wait for UPS to respond before asking FedEx — all queries can run in parallel, and the results are collected into a list of available shipping rates.
public class GetShippingRatesPipeline
: ParallelPipeline<GetShippingRatesPipeline, ShippingRequest, ShippingRate>
{
public override ValueTask<ShippingRate> ExecuteDefaultAsync(
ShippingRequest input, IPipelineContext context,
CancellationToken token = default)
{
// Default/fallback rate (e.g., flat rate shipping)
return new ValueTask<ShippingRate>(
new ShippingRate { Carrier = "Standard", Cost = 9.99m });
}
}
// Execution returns a collection of all results
IEnumerable<ShippingRate> rates = await
GetShippingRatesPipeline.ExecuteAsync(request, context);
Serial vs Parallel Comparison
| | Serial Pipeline | Parallel Pipeline |
|---|---|---|
| Execution | Sequential — each hook runs after the previous one completes | Simultaneous — all hooks run at the same time |
| Results | Chained — each hook receives the previous hook's output | Independent — each hook produces its own output |
| Output | Single final transformed result | An IEnumerable collecting all results |
| Best for | Hooks that depend on each other's results (the default choice) | Independent hooks (e.g., multi-provider queries) |
When in doubt, use Serial
If you are unsure whether to use a Serial or Parallel pipeline, default to Serial. Serial pipelines are the standard pattern and work correctly in the vast majority of cases. Only use Parallel pipelines when you have a clear use case where hooks are truly independent and would benefit from concurrent execution.
Pipeline Context
IPipelineContext is the gateway to all state and resources available during pipeline execution. Every pipeline method receives a context parameter, giving hooks and default logic access to authentication state, request information, database connections, caching infrastructure, and dependency injection services.
The context is designed to be a single, consistent entry point so that pipeline code never needs to reach outside the pipeline framework for common resources.
IPipelineContext connects pipeline code to:
- Auth State: User ID, Customer ID, roles & permissions
- Request State: headers, cookies, endpoint path
- Database: EF Core DbContext (query & write)
- Cache: distributed cache and scope cache
- Scheduler: background jobs and deferred work
- DI Services: any registered service via DI
Accessing the Context
Inside a pipeline or hook, the context is always available as a method parameter. Outside of pipelines (for example, in a controller that needs to create a context to call a pipeline), you can obtain an IPipelineContext through dependency injection or the [FromServices] attribute.
// Option 1: Dependency Injection (constructor)
public class MyController : Controller
{
private readonly IPipelineContext _context;
public MyController(IPipelineContext context)
{
_context = context;
}
}
// Option 2: [FromServices] attribute
public async Task<IActionResult> GetItem(
int id,
[FromServices] IPipelineContext context)
{
var result = await GetItemPipeline.ExecuteAsync(id, context);
return Ok(result);
}
// Inside a pipeline/hook, context is always a parameter
public override ValueTask<SomeModel> ExecuteDefaultAsync(
int input, IPipelineContext context, CancellationToken token)
{
// Access current user
var userId = context.Auth.CurrentUserId;
// Access request headers
var authHeader = context.Request.Headers["Authorization"];
// Access database
var db = context.Database;
// Access a DI service
var myService = context.GetService<IMyService>();
// ...
return new ValueTask<SomeModel>(new SomeModel { Id = input });
}
| Property / Method | Provides Access To |
|---|---|
| context.Auth | Current User ID, Customer ID, role and permission checks |
| context.Request | HTTP headers, cookies, endpoint path, query parameters |
| context.Database | Entity Framework Core DbContext for queries and writes |
| context.Cache | IDistributedCache for manual cache operations |
| context.Scheduler | Background job scheduler for deferred/recurring tasks |
| context.GetService<T>() | Any service registered in the DI container |
Pipeline Caching
Phoenix provides multiple caching strategies for pipelines, ranging from simple attribute-based distributed caching to manual cache control and request-scoped memory caching.
Distributed Caching with [DistributedCached]
The simplest way to cache a pipeline's output is to apply the [DistributedCached] attribute to the pipeline class. By default, this caches the result for 10 minutes. The cache key is automatically derived from the pipeline type and input value.
// Default: 10-minute cache
[DistributedCached]
public class GetProductPipeline
: SerialPipeline<GetProductPipeline, int, Product>
{
// ...
}
// Custom duration: 30-minute cache
[DistributedCached(CacheDuration = 30)]
public class GetCategoryTreePipeline
: SerialPipeline<GetCategoryTreePipeline, string, CategoryTree>
{
// ...
}
// Per-user cache: separate cache entry for each user
[DistributedCached(VaryByUser = true)]
public class GetUserDashboardPipeline
: SerialPipeline<GetUserDashboardPipeline, int, Dashboard>
{
// ...
}
| Property | Default | Description |
|---|---|---|
| CacheDuration | 10 (minutes) | How long the cached result remains valid |
| VaryByUser | false | If true, each user gets their own cache entry (keyed by User ID) |
Manual Caching via IPipelineContext
For more control over cache behavior, you can access the IDistributedCache directly through the pipeline context. This is useful when you need to cache intermediate values, use custom keys, or implement conditional caching logic.
The context also provides a convenient promise-style method, ResolveAsync, which checks the cache for a given key and only executes the factory function if the key is not found.
// Direct IDistributedCache access
var cache = context.Cache;
var cachedValue = await cache.GetStringAsync("my-custom-key");
// Promiser-style: resolve from cache or compute
var product = await context.ResolveAsync(
"Product_" + productId,
async () =>
{
// This lambda only runs if the cache key is missing
return await FetchProductFromDatabase(productId);
});
Scope Caching with [ScopeCached]
[ScopeCached] is a memory-only cache that lives for the duration of the current request or task scope. Unlike distributed caching, the data is never serialized or sent to an external cache store — it exists purely in the application's memory and is discarded when the scope ends.
This is an advanced and relatively rare pattern, useful when a pipeline is called multiple times within a single request and you want to avoid redundant computation without the overhead of distributed cache serialization.
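Usage mirrors [DistributedCached] (the pipeline name here is illustrative):

```csharp
// Memory-only: repeated executions with the same input inside one
// request scope reuse the first result; nothing leaves process memory.
[ScopeCached]
public class GetExchangeRatePipeline
    : SerialPipeline<GetExchangeRatePipeline, string, decimal>
{
    // ...
}
```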
Choosing the Right Cache Strategy
- [DistributedCached] — Best for data that is expensive to compute and shared across requests/users. Survives application restarts.
- context.ResolveAsync — Best for manual control over cache keys, conditional logic, or caching intermediate pipeline values.
- [ScopeCached] — Best for avoiding redundant computation within a single request. Does not persist beyond the request.
Finding Pipelines
When you need to find existing pipelines in the codebase, there are a few reliable strategies:
Search by Class Name
Search the codebase for classes or file names containing Pipeline. By convention, every pipeline class should include "Pipeline" in its name (e.g., GetProductPipeline, CalculateTaxPipeline).
Trace from the Endpoint
If you know which API endpoint or controller you want to customize, open its handler and look for the pipeline it calls. Controllers almost always delegate to a pipeline via PipelineName.ExecuteAsync(...).
Search for Base Classes
Search for : SerialPipeline or : ParallelPipeline to find all pipeline definitions in the solution. This is the most comprehensive approach.
Naming Convention
Every pipeline class in the Phoenix codebase should contain "Pipeline" in its name. This is a project-wide convention that ensures pipelines are always discoverable through simple text search. When creating new pipelines, always follow this convention.