Cookbook

This page contains five self-contained recipes that cover the most common SPC analysis scenarios. Each recipe describes its input data shape, provides a working code sample in C#, explains what the results mean, and highlights the single most common mistake practitioners make in that scenario.

These recipes assume the reader has already read Getting Started with Numerics.NET SPC. For guidance on selecting the right chart type, see Choosing the Right Analysis.

Recipe 1: Individuals Process Analysis (I-MR)

Scenario. A chemical batch process produces one measurement per batch: the yield (%) of a distillation step. Batches arrive in time order and cannot be grouped into rational subgroups. The goal is to determine whether the yield process is in statistical control and, if so, to estimate process capability against a specification of 90 %–105 %.

Input shape. A single IReadOnlyList<double> of n observations in time order. Missing values should be omitted before passing to the API; the library treats NaN as a contract violation in individuals data.

C#
// Recipe: Individuals process analysis (I-MR)
double[] measurements = {
    96.1, 97.0, 95.8, 97.5, 96.3, 97.8, 95.5, 97.2,
    96.7, 97.1, 96.4, 97.3, 96.6, 97.0, 96.2
};

ProcessAnalysisResult<IndividualsMovingRangeChartSetData> result =
    ProcessAnalysis.Analyze(
        Vector.Create(measurements),
        specifications: new SpecificationLimits(Lower: 90.0, Upper: 105.0),
        ruleSet: ControlRuleSets.NelsonStandard);

IndividualsChartData iChart = result.ChartData.Individuals;
Console.WriteLine(
    $"I chart: CL={iChart.CenterLine:F2} " +
    $"UCL={iChart.UpperControlLimit:F2} LCL={iChart.LowerControlLimit:F2}");

bool isStable = result.Rules == null || !result.Rules.HasViolations;
Console.WriteLine($"Stable: {isStable}");

if (isStable && result.Capability != null)
    Console.WriteLine($"Cpk = {result.Capability.Cpk:F3}");

Reading the result. ProcessAnalysis returns a result whose control chart section lists each observation with its plotted value, control limits, and any rule violations. The capability section exposes Cp and Cpk (using SigmaEstimator.MovingRange by default) and the corresponding performance indices Pp and Ppk. A process in control with a Cpk above 1.33 meets the conventional automotive/manufacturing benchmark.
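The constants behind these numbers are the standard Shewhart values, which are useful for sanity-checking output. A minimal Python sketch of the I-chart limits and the moving-range Cpk the analysis is assumed to compute (2.66 = 3/d2, with d2 = 1.128 for moving ranges of span 2; the data below is illustrative, not library output):

```python
def imr_limits(x):
    """Centre line and I-chart control limits from time-ordered data."""
    mean = sum(x) / len(x)
    # Average moving range: mean absolute difference of consecutive points
    mr_bar = sum(abs(a - b) for a, b in zip(x[1:], x)) / (len(x) - 1)
    # 2.66 = 3 / d2, where d2 = 1.128 for a moving range of span 2
    return mean, mean + 2.66 * mr_bar, mean - 2.66 * mr_bar

def cpk(x, lsl, usl):
    """Cpk with sigma estimated from the average moving range."""
    mean = sum(x) / len(x)
    mr_bar = sum(abs(a - b) for a, b in zip(x[1:], x)) / (len(x) - 1)
    sigma = mr_bar / 1.128  # short-term sigma estimate
    return min(usl - mean, mean - lsl) / (3 * sigma)

yields = [96.1, 97.0, 95.8, 97.5, 96.3]  # illustrative yield data
cl, ucl, lcl = imr_limits(yields)
```

The same arithmetic applies to any individuals series; only the span-2 constant changes if a wider moving range is configured.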

  Caution

Do not pass observations in non-chronological order: the moving-range computation is order-dependent, and reordering data changes both control limits and capability estimates.
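To see why order matters, compare the average moving range of the same five numbers in time order and in sorted order (a hypothetical Python check, not library code):

```python
def mr_bar(x):
    # Average absolute difference of consecutive values: order-dependent
    return sum(abs(a - b) for a, b in zip(x[1:], x)) / (len(x) - 1)

time_order = [23.1, 24.8, 22.5, 24.2, 23.7]
reordered = sorted(time_order)
# Same numbers, different moving-range average, hence different limits
```

Sorting removes the point-to-point variation, shrinking the estimated sigma and tightening the control limits artificially.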

Recipe 2: Subgrouped Manufacturing Analysis (XBar-R)

Scenario. A machining centre produces shafts in a continuous run. Every hour an operator measures five consecutive shafts (a rational subgroup) and records the diameters in millimetres. Twenty-five subgroups have been collected. The engineering tolerance is 50.00 mm ± 0.05 mm.

Input shape. A jagged or rectangular 2-D collection where each inner sequence is one subgroup of equal or near-equal size. XBar-R is reliable for subgroup sizes 2–10; for larger subgroups use XBar-S instead. See Variables Charts for the supported size range.

C#
// Recipe: Subgrouped manufacturing analysis (XBar-R, n=5)
// First 5 of the 25 hourly subgroups, shown for brevity
double[,] rawData = {
    { 50.01, 49.98, 50.02, 49.97, 50.00 },
    { 49.99, 50.03, 49.96, 50.01, 50.02 },
    { 50.00, 49.98, 50.01, 50.02, 49.99 },
    { 50.02, 49.97, 50.00, 49.99, 50.01 },
    { 49.98, 50.02, 49.99, 50.03, 50.00 }
};
Matrix<double> subgroups = Matrix.CopyFrom(rawData);
var specs = new SpecificationLimits(Lower: 49.95, Upper: 50.05);

ProcessAnalysisResult<XBarRChartSetData> result =
    ProcessAnalysis.AnalyzeXBarR(
        subgroups, specs,
        ruleSet: ControlRuleSets.NelsonStandard);

Console.WriteLine(
    $"XBar UCL={result.ChartData.XBar.UpperControlLimit:F3}");
Console.WriteLine(
    $"Cpk = {result.Capability?.Cpk:F3}");

Reading the result. The result contains separate chart results for the XBar chart (monitoring the subgroup mean) and the R chart (monitoring the subgroup range). Stability must be evaluated on both charts: an R chart out of control indicates that within-subgroup variation itself is unstable, which invalidates the XBar control limits. If both charts show control, the capability section reports Cp and Cpk using the within-subgroup range estimator (R-bar/d2).
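For reference, the XBar-R limit formulas use tabulated constants (A2, D3, D4, d2) that depend on the subgroup size. A hedged Python sketch for n = 5, using illustrative diameter data; the library's exact estimators and rounding may differ:

```python
# Tabulated Shewhart constants for subgroup size n = 5
A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326

subgroups = [
    [50.01, 49.98, 50.02, 49.97, 50.00],
    [49.99, 50.03, 49.96, 50.01, 50.02],
    [50.00, 49.98, 50.01, 50.02, 49.99],
]
xbars = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar = sum(xbars) / len(xbars)   # grand mean: XBar centre line
rbar = sum(ranges) / len(ranges)    # average range: R centre line

xbar_ucl = xbarbar + A2 * rbar
xbar_lcl = xbarbar - A2 * rbar
r_ucl = D4 * rbar                   # D3 = 0 for n <= 6, so the R LCL is 0
sigma_within = rbar / d2            # within-subgroup sigma, feeds Cp/Cpk
```

Note that the XBar limits are driven by rbar, which is why an unstable R chart invalidates them.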

  Caution

Do not use XBar-R with subgroup sizes larger than 10: the range becomes an inefficient estimator of dispersion compared with the standard deviation, and published tables of the d2 unbiasing constant typically stop at size 25. Use XBar-S for larger subgroups.

Recipe 3: Variable Sample-Size P Chart

Scenario. A circuit-board assembly line inspects every board in each production lot, but lot sizes vary from 50 to 300 boards. The quality metric is the proportion defective per lot. Because the denominator changes from lot to lot, the control limits must be recalculated individually for each point.

Input shape. Two parallel sequences of equal length: a sequence of defective counts (integers or doubles) and a sequence of sample sizes (integers). The sample sizes may all differ. Do not pre-compute proportions and pass them as the first sequence; the API requires the raw counts so it can compute the binomial limits correctly.

C#
// Recipe: Variable sample-size P chart
Vector<double> defects = Vector.Create(
    new double[] { 3, 5, 2, 7, 4, 6, 3, 8, 2, 5 });
Vector<double> sizes = Vector.Create(
    new double[] { 120, 80, 200, 150, 95, 300, 60, 250, 110, 75 });

ProcessAnalysisResult<PChartData> result =
    ProcessAnalysis.AnalyzeP(defects, sizes,
        ruleSet: ControlRuleSets.NelsonStandard);

PChartData chart = result.ChartData;
Console.WriteLine($"P bar = {chart.CenterLine:F4}");

// Render with pointwise limits
for (int i = 0; i < chart.Points.Length; i++)
    Console.WriteLine(
        $"  [{i}] p={chart.Points[i]:F4} " +
        $"UCL={chart.UpperControlLimits[i]:F4}");

Reading the result. Because sample sizes differ, each plotted point has its own pair of upper and lower control limits. The result model exposes per-point limit vectors; do not read a single scalar limit from the result and apply it to all points. Points where the computed lower control limit would be negative are floored at zero. See Result Model and Rendering Semantics for details on accessing per-point vectors.
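The per-point limits follow the usual binomial formula, with the width shrinking as the sample size grows. A hypothetical Python sketch of the values the result model is assumed to hold (illustrative data, not library code):

```python
import math

defects = [3, 5, 2, 7, 4]
sizes = [120, 80, 200, 150, 95]
p_bar = sum(defects) / sum(sizes)   # pooled centre line

points, ucls, lcls = [], [], []
for d, n in zip(defects, sizes):
    points.append(d / n)
    # 3-sigma binomial half-width; narrower for larger samples
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    ucls.append(p_bar + half_width)
    lcls.append(max(0.0, p_bar - half_width))  # floored at zero
```

With a small p_bar, the computed lower limit is often negative and gets floored to zero, which is why a sudden quality improvement can be invisible on the low side of a P chart.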

  Caution

Never pass a single scalar control limit to your chart renderer when the sample size varies; always use the per-point limit vectors from the result.

Recipe 4: Capability Analysis with Assumption Diagnostics

Scenario. After confirming an XBar-R process is in control (Recipe 2), the team needs to produce a formal capability report that includes a normality test so the customer can evaluate whether the standard Cp/Cpk indices are valid for this process.

Input shape. A measurement sequence (for brevity, the sample below analyzes individuals data rather than the subgrouped data from Recipe 2), plus a SpecificationLimits record with at minimum one non-null limit, and the flag assumptionDiagnostics: true passed to the analysis call.

C#
// Recipe: Capability analysis with assumption diagnostics
double[] data = {
    23.1, 24.0, 22.8, 24.5, 23.3, 24.8, 22.5, 24.2,
    23.7, 24.1, 23.4, 24.3, 23.6, 24.0, 23.2, 24.4,
    23.0, 24.6, 22.9, 24.1
};
var specs = new SpecificationLimits(Lower: 21.0, Upper: 27.0);

ProcessAnalysisResult<IndividualsMovingRangeChartSetData> result =
    ProcessAnalysis.Analyze(
        Vector.Create(data),
        specifications: specs,
        ruleSet: ControlRuleSets.NelsonStandard,
        assumptionDiagnostics: true);

bool stable = result.Rules == null || !result.Rules.HasViolations;
Console.WriteLine($"Stable: {stable}");

if (result.Capability != null)
{
    Console.WriteLine($"Cp={result.Capability.Cp:F3} Cpk={result.Capability.Cpk:F3}");
    Console.WriteLine($"Pp={result.Capability.Pp:F3} Ppk={result.Capability.Ppk:F3}");
}

if (result.Assumptions?.NormalityTest != null)
    Console.WriteLine(
        $"Normality p = {result.Assumptions.NormalityTest.PValue:F4}");

Reading the result. The CapabilityAssumptionDiagnostics object is returned alongside the CapabilityAnalysisResult. Check the PValue from the Anderson-Darling test. A p-value above 0.05 does not contradict normality; a value below 0.05 warrants investigation before submitting the Cpk figure. The capability indices themselves are always computed regardless of the test outcome. For a full discussion, see Capability, Performance, and Assumption Diagnostics.
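The difference between the Cp/Cpk and Pp/Ppk pairs is only the sigma estimate: short-term (within-process, here from the average moving range) versus long-term (overall sample standard deviation). A hedged Python sketch with illustrative data; the library's estimators may apply additional unbiasing corrections:

```python
import statistics

data = [23.1, 24.0, 22.8, 24.5, 23.3, 24.8, 22.5, 24.2]
lsl, usl = 21.0, 27.0
mean = statistics.fmean(data)

# Short-term sigma from the average moving range (d2 = 1.128 for span 2)
mr_bar = sum(abs(a - b) for a, b in zip(data[1:], data)) / (len(data) - 1)
sigma_st = mr_bar / 1.128
# Long-term sigma: overall sample standard deviation
sigma_lt = statistics.stdev(data)

cp = (usl - lsl) / (6 * sigma_st)    # capability (short-term)
pp = (usl - lsl) / (6 * sigma_lt)    # performance (long-term)
cpk = min(usl - mean, mean - lsl) / (3 * sigma_st)
ppk = min(usl - mean, mean - lsl) / (3 * sigma_lt)
```

Cpk and Ppk also account for centring, which is why they fall below Cp and Pp whenever the process mean drifts off the midpoint of the specification.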

  Caution

Do not skip the stability check in Recipe 2 and jump straight to this recipe: capability computed on unstable data is not interpretable.

Recipe 5: API Round-Trip — Serialize and Restore a Result

Scenario. A web service computes SPC results on a background worker and needs to cache or transmit the result to a front-end that renders the chart. The front-end should not re-run the analysis; it should deserialize the result and read the per-point vectors directly for rendering without any further computation.

Input shape. A previously computed analysis result object. The result model is designed to be serialized (e.g., to JSON via System.Text.Json) and deserialized on the receiving side. All plotted values, control limits, and rule violation flags are stored as immutable vectors and survive a round-trip through any serializer that handles nullable numerics and vectors. See Integration and Persistence for serializer configuration details.

C#
// Recipe: Serialize and restore a result
double[] data = {
    23.1, 24.0, 22.8, 24.5, 23.3, 24.8, 22.5, 24.2, 23.7, 24.1
};
var specs = new SpecificationLimits(Lower: 21.0, Upper: 27.0);

ProcessAnalysisResult<IndividualsMovingRangeChartSetData> original =
    ProcessAnalysis.Analyze(Vector.Create(data), specifications: specs);

// Serialize
string json = original.ToJson();

// Restore (e.g., from database or API response)
ProcessAnalysisResult<IndividualsMovingRangeChartSetData> restored =
    ProcessAnalysisResult<IndividualsMovingRangeChartSetData>.FromJson(json);

Console.WriteLine(
    $"Original Cpk = {original.Capability?.Cpk:F4}");
Console.WriteLine(
    $"Restored Cpk = {restored.Capability?.Cpk:F4}");

Reading the result. After deserialization the restored object is functionally identical to the original: all indexed vectors are present at the same positions, control limits match, and rule violation flags are preserved. The receiver does not need access to the original raw data at all. This pattern is the recommended approach for any architecture that separates the computation tier from the presentation tier.
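The round-trip itself is plain serialization. A language-agnostic sketch of the pattern in Python (the field names here are hypothetical, not the library's JSON schema):

```python
import json

# Hypothetical result payload: per-point vectors plus scalar summary fields
result = {
    "centerLine": 23.54,
    "points": [23.1, 24.0, 22.8, 24.5, 23.3],
    "upperControlLimits": [26.87, 26.87, 26.87, 26.87, 26.87],
    "ruleViolations": [False, False, False, False, False],
    "cpk": 0.764,
}

payload = json.dumps(result)    # computation tier: serialize once
restored = json.loads(payload)  # presentation tier: deserialize, no re-analysis

# All indexed vectors survive at the same positions
assert restored["points"] == result["points"]
assert restored["cpk"] == result["cpk"]
```

The presentation tier only indexes into the restored vectors; it never touches the raw measurements.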

  Caution

Do not transmit the raw data and re-run the analysis on the front-end: recomputing control limits in a different sample context can produce different results. Serialize and transmit the computed result instead.

See Also