Capability and Diagnostics
Process capability analysis quantifies how well a process meets its specification limits. Because capability metrics assume that observed variation reflects only the natural, common-cause behaviour of the process, they are only meaningful when the process is first confirmed to be statistically stable. Running a capability study on an out-of-control process produces numbers that describe a mixture of assignable causes and true process behaviour, and the result cannot be used to predict future conformance.
This page describes the capability and performance indices supported by the library, the sigma estimators available, the optional assumption diagnostics that accompany a capability result, and the interpretation cautions every practitioner should keep in mind.
Capability vs. Performance
The library distinguishes two families of indices that differ only in which sigma estimate they use:
Capability indices (Cp, Cpk, Cpl, Cpu) use the within-process sigma, an estimate of common-cause variation only. This reflects what the process is capable of when running in a stable state, stripping out any between-subgroup or assignable-cause variation.
Performance indices (Pp, Ppk, Ppl, Ppu) use the overall sigma, the standard deviation computed across the entire sample period, incorporating all sources of variation, including shifts, drifts, and special causes that occurred during the study.
Both families measure the spread of the process distribution relative to the width of the specification window. A value of 1.0 means that the six-sigma spread of the process exactly fills that window; larger values indicate more margin between the process variation and the limits.
Supported Metrics
The following indices are returned by CapabilityAnalysisResult. Any index whose required specification limit has not been supplied is returned as null.
| Index | Formula | Required limits | Sigma used |
|---|---|---|---|
| Cp | (USL − LSL) / 6σ | Both USL and LSL | Within |
| Cpk | min(Cpu, Cpl) | At least one of USL or LSL | Within |
| Cpl | (μ − LSL) / 3σ | LSL | Within |
| Cpu | (USL − μ) / 3σ | USL | Within |
| Pp | (USL − LSL) / 6σ | Both USL and LSL | Overall |
| Ppk | min(Ppu, Ppl) | At least one of USL or LSL | Overall |
| Ppl | (μ − LSL) / 3σ | LSL | Overall |
| Ppu | (USL − μ) / 3σ | USL | Overall |

Here μ is the process mean and σ is the sigma estimate named in the last column.
The Mean, WithinSigma, and OverallSigma properties expose the underlying estimates used in all index calculations. WithinSigmaEstimator records which SigmaEstimator was applied.
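The definitions above can be made concrete with a short sketch. The following is plain Python, independent of the library's C# API; the function and key names are hypothetical and only illustrate the formulas:

```python
def capability_indices(mean, sigma, lsl=None, usl=None):
    """Compute one family of indices from a mean and a sigma estimate.

    Passing the within sigma yields the capability family (Cp, Cpk, Cpl, Cpu);
    passing the overall sigma yields the performance family (Pp, Ppk, Ppl, Ppu).
    Indices whose required limit is missing are returned as None, mirroring
    the null-valued properties described above.
    """
    lower = (mean - lsl) / (3 * sigma) if lsl is not None else None   # Cpl / Ppl
    upper = (usl - mean) / (3 * sigma) if usl is not None else None   # Cpu / Ppu
    two_sided = None                                                  # Cp / Pp
    if lsl is not None and usl is not None:
        two_sided = (usl - lsl) / (6 * sigma)
    candidates = [x for x in (lower, upper) if x is not None]
    k = min(candidates) if candidates else None                       # Cpk / Ppk
    return {"two_sided": two_sided, "k": k, "lower": lower, "upper": upper}

# A centred process: mean 10.0, sigma 0.25, specs [9.0, 11.0].
print(capability_indices(10.0, 0.25, lsl=9.0, usl=11.0))
# two_sided and k are both 4/3 ≈ 1.333 because the process is centred.
```

Because the only difference between the two families is the sigma passed in, a shifted or drifting process shows up as Pp/Ppk falling below Cp/Cpk.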
Specification Inputs
Specification limits are supplied through the SpecificationLimits class, which carries three nullable fields:
Lower: the lower specification limit (LSL). May be null for one-sided upper processes.
Upper: the upper specification limit (USL). May be null for one-sided lower processes.
Target: the process target value. Optional; not required for any of the standard Cp/Pp family indices, but used by target-centred metrics if present.
Passing only one limit produces a one-sided study: indices that require the absent limit are returned as null, while indices that rely only on the supplied limit (e.g., Cpu when only Upper is given) are computed as usual.
The Specifications property echoes back the SpecificationLimits instance used in the analysis, making round-tripped results self-describing.
Sigma Estimators
The within-process sigma used for the capability indices (Cp, Cpk, Cpl, Cpu) is selected through the SigmaEstimator enumeration:
MovingRange: estimates sigma from the average moving range of successive individual observations (MR̄ / d₂, with d₂ ≈ 1.128 for ranges of two successive points). This is the natural choice for individuals (I‑MR) studies.
WithinRange: estimates sigma from the average within-subgroup range or standard deviation (R̄ / d₂ or s̄ / c₄, where d₂ and c₄ are the usual bias-correction constants for the subgroup size). This is the natural choice for subgrouped studies.
OverallStandardDeviation: uses the ordinary sample standard deviation of all observations, pooled across all subgroups or time points. This is the estimator used for the performance indices (Pp, Ppk, Ppl, Ppu).
Choosing the wrong estimator is a common source of misleading results. For example, applying MovingRange to subgrouped data ignores within-subgroup variation and will over-estimate capability. See Common Pitfalls for a detailed discussion of estimator misuse.
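The sensitivity to the estimator is easy to demonstrate. In the following sketch (plain Python, independent of the library; the helper names are hypothetical), a sustained mean shift inflates the overall standard deviation while barely affecting a moving-range estimate, so capability and performance indices computed from the same data would diverge:

```python
import math

def within_sigma_moving_range(x):
    """Sigma estimate from the average moving range: MR-bar / d2,
    with d2 = 1.128 for moving ranges of two successive points."""
    mr = [abs(b - a) for a, b in zip(x, x[1:])]
    return (sum(mr) / len(mr)) / 1.128

def overall_sigma(x):
    """Ordinary sample standard deviation across the whole study."""
    m = sum(x) / len(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / (len(x) - 1))

# A stable stretch followed by a sustained 2-unit mean shift:
data = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0,
        12.0, 12.1, 11.9, 12.2, 12.0, 11.8, 12.1, 12.0]
print(within_sigma_moving_range(data))  # dominated by point-to-point noise
print(overall_sigma(data))              # inflated by the shift between halves
```

On this data the overall sigma is several times the moving-range estimate; the divergence is itself a sign that the process was not stable during the study.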
Assumption Diagnostics
All standard capability indices rest on an assumption that the process measurements are normally distributed within the specification window. The library can optionally test this assumption and return the result alongside the capability indices.
To enable assumption diagnostics, pass a normality test kind to Capability: Capability.Analyze(..., normalityTest: TestOfNormality.AndersonDarling). The Diagnostics property on the result returns an AssumptionDiagnostics? object that exposes:
NormalityTest: the normality test result, which provides the test statistic and p-value through its Statistic and PValue properties.
advisory diagnostic messages generated during assumption analysis.
A low p-value (conventionally below 0.05) provides evidence against normality. This does not prevent the capability indices from being computed; diagnostics are advisory. However, a significant normality failure should prompt the practitioner to investigate whether a transformation or a non-normal capability model is more appropriate before reporting results to a customer or regulatory body.
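For intuition about what such a test computes, here is a sketch of the Anderson–Darling A² statistic in plain Python. This is illustrative only, not the library's implementation, and it omits the sample-size-dependent p-value lookup that a real TestOfNormality implementation performs:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def anderson_darling(data):
    """A-squared statistic for normality, standardizing by the sample mean
    and standard deviation. Larger values mean stronger evidence against
    normality; converting A-squared to a p-value requires n-dependent
    tables that are omitted here."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    z = sorted((x - mean) / sd for x in data)
    total = sum(
        (2 * i + 1) * (math.log(normal_cdf(z[i]))
                       + math.log(1.0 - normal_cdf(z[n - 1 - i])))
        for i in range(n)
    )
    return -n - total / n

sample = [10.5, 11.2, 10.8, 11.5, 10.3, 11.8, 10.1, 11.4,
          10.9, 11.1, 10.6, 11.3, 10.7, 11.0, 10.4]
print(f"A^2 = {anderson_darling(sample):.4f}")
```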
Confidence Intervals
The properties on CapabilityAnalysisResult return point estimates. Because capability indices are sample statistics, any single estimate carries uncertainty that grows substantially as sample size decreases.
For formal reporting or customer-facing submissions where interval estimates are required, the library provides a companion type, obtained through a static Analyze method on Capability, that exposes distributional properties of the indices and can produce confidence intervals appropriate to each estimator. Alternatively, bootstrap or simulation-based intervals can be constructed by re-sampling the underlying data and aggregating results across replications.
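The bootstrap alternative mentioned above can be sketched as follows (plain Python, independent of the library; the index computed here is a Ppk-style estimate based on the overall standard deviation):

```python
import random
import statistics

def ppk(data, lsl, usl):
    """Ppk-style point estimate from the overall standard deviation."""
    m = statistics.fmean(data)
    s = statistics.stdev(data)
    return min((usl - m) / (3 * s), (m - lsl) / (3 * s))

def bootstrap_interval(data, lsl, usl, replications=2000, alpha=0.05, seed=42):
    """Percentile bootstrap: resample with replacement, recompute the
    index, and take the alpha/2 and 1 - alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    estimates = sorted(
        ppk([rng.choice(data) for _ in range(len(data))], lsl, usl)
        for _ in range(replications)
    )
    lo = estimates[int(alpha / 2 * replications)]
    hi = estimates[int((1 - alpha / 2) * replications) - 1]
    return lo, hi

data = [10.5, 11.2, 10.8, 11.5, 10.3, 11.8, 10.1, 11.4,
        10.9, 11.1, 10.6, 11.3, 10.7, 11.0, 10.4]
point = ppk(data, 9.0, 13.0)
lo, hi = bootstrap_interval(data, 9.0, 13.0)
print(f"Ppk = {point:.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")
```

With only 15 observations the interval is wide, which illustrates why point estimates from small studies should be reported together with intervals.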
Code Examples
The following examples show the three most common capability analysis patterns.
Subgrouped capability analysis (XBar‑R study)
double[,] rawData = {
{ 9.8, 10.2, 10.0, 10.1 },
{ 10.3, 9.9, 10.1, 10.2 },
{ 9.7, 10.4, 9.8, 10.0 },
{ 10.1, 10.0, 9.9, 10.3 },
{ 10.2, 9.8, 10.2, 10.1 }
};
Matrix<double> subgroups = Matrix.CopyFrom(rawData);
var specs = new SpecificationLimits(Lower: 9.0, Upper: 11.0);
CapabilityAnalysisResult cap = Capability.Analyze(subgroups, specs);
Console.WriteLine($"Cp = {cap.Cp:F4}");
Console.WriteLine($"Cpk = {cap.Cpk:F4}");
Console.WriteLine($"Pp = {cap.Pp:F4}");
Console.WriteLine($"Ppk = {cap.Ppk:F4}");

Individuals capability analysis (I‑MR study)
double[] data = {
10.5, 11.2, 10.8, 11.5, 10.3, 11.8, 10.1, 11.4,
10.9, 11.1, 10.6, 11.3, 10.7, 11.0, 10.4
};
var specs = new SpecificationLimits(Lower: 9.0, Upper: 13.0);
CapabilityAnalysisResult cap = Capability.Analyze(
Vector.Create(data), specs,
withinSigma: SigmaEstimator.MovingRange);
Console.WriteLine($"Cp = {cap.Cp:F4}");
Console.WriteLine($"Cpk = {cap.Cpk:F4}");
Console.WriteLine($"Within sigma = {cap.WithinSigma:F4}");
Console.WriteLine($"Overall sigma = {cap.OverallSigma:F4}");

Capability with assumption diagnostics
double[] data = {
10.5, 11.2, 10.8, 11.5, 10.3, 11.8, 10.1, 11.4,
10.9, 11.1, 10.6, 11.3, 10.7, 11.0, 10.4
};
var specs = new SpecificationLimits(Lower: 9.0, Upper: 13.0);
Vector<double> observations = Vector.Create(data);
// v2: chart analysis and capability analysis are separate steps
IndividualsMovingRangeChartSet chart =
new IndividualsMovingRangeChartSet(observations);
chart.Analyze();
// Request normality testing as part of capability analysis
CapabilityAnalysisResult cap = Capability.Analyze(
observations, specs,
normalityTest: TestOfNormality.ShapiroWilk);
Console.WriteLine($"Cpk = {cap.Cpk:F4}");
Console.WriteLine($"Ppk = {cap.Ppk:F4}");
// Access assumption diagnostics from the capability result
AssumptionDiagnostics? diag = cap.Diagnostics;
if (diag?.NormalityTest != null)
Console.WriteLine(
$"Normality p-value = {diag.NormalityTest.PValue:F4} " +
$"({diag.NormalityTest.Name})");

Interpretation Cautions
Keep the following limitations in mind when interpreting results:
Unstable process. Capability indices computed from an out-of-control process are statistically undefined. They conflate assignable-cause variation with common-cause variation, so no sigma estimator can produce a meaningful within-sigma. Always confirm stability first.
Non-normality. The Cp/Pp family assumes that process measurements are normally distributed. When measurements are skewed, multimodal, or bounded, the indices overstate or understate the actual proportion non-conforming. The assumption diagnostics section above explains how to detect this condition.
Small sample size. Point estimates of Cp and Cpk are highly variable for fewer than 50–100 observations. A study with 20 subgroups of size 5 (n = 100) provides a reasonably tight interval around the true Cp; a study with 10 subgroups of size 3 (n = 30) does not.
Estimator sensitivity. The choice of within-sigma estimator has a large effect on Cp and Cpk. When the process is stable, MovingRange, WithinRange, and OverallStandardDeviation converge. When they diverge, that divergence itself is a diagnostic signal worth investigating.
One-sided vs. two-sided Cpk. When only one specification limit is supplied, Cpk reduces to either Cpu or Cpl alone. This is valid for inherently one-sided processes (e.g., a minimum strength requirement with no upper limit), but reporting a one-sided Cpk for a nominally two-sided tolerance can be misleading.