FAQ
This page answers the questions most frequently asked by developers integrating the Numerics.NET SPC API into their applications. Questions are organized thematically; use the section links below to jump directly to an answer.
Does the library render charts?
No. The Numerics.NET SPC API is a pure computation library; it calculates control limits, plotted values, rule violation flags, and capability indices, but it does not produce any graphical output. All results are returned as structured data objects whose per-point arrays can be consumed directly by any charting library (e.g., OxyPlot, LiveCharts, a JavaScript renderer via a web API, or a BI tool).
This separation of computation from presentation is intentional: it allows the same result to be rendered in a desktop application, a web browser, a PDF report, or a real-time dashboard without any change to the analysis code. See Result Model and Rendering Semantics for the full description of what each result property contains and how to map it to a chart renderer.
Does it support streaming or stateful monitoring?
The library supports Phase II (monitoring) analyses via the Deploy() / Apply(Vector&lt;Double&gt;) lifecycle: fit a chart from Phase I baseline data, call Deploy() to freeze the control limits, then call Apply(Vector&lt;Double&gt;) on each new batch of observations. Each Apply call produces a new fitted chart evaluated against the frozen baseline.
Per-observation incremental (streaming) updates are not supported. Observations must be passed in as a complete batch. See Integration and Persistence for how to store and reuse deployed charts across sessions.
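The freeze-then-evaluate pattern can be sketched in plain Python. All names below are illustrative, not the library's .NET API, and the sketch uses the sample standard deviation for brevity where an Individuals chart would normally use a moving-range sigma estimate:

```python
# Illustrative sketch of the Phase I / Phase II lifecycle described above:
# fit estimates the limits from baseline data, deploy freezes them, and
# each apply batch is judged against the frozen limits, never re-fitted.
from statistics import mean, stdev

class IndividualsChart:
    def __init__(self):
        self.center = None
        self.sigma = None
        self.deployed = False

    def fit(self, baseline):
        # Phase I: estimate center line and sigma from the baseline batch.
        self.center = mean(baseline)
        self.sigma = stdev(baseline)
        return self

    def deploy(self):
        # Freeze the limits; later batches must not change them.
        self.deployed = True
        return self

    def apply(self, batch):
        # Phase II: evaluate a complete new batch against the frozen limits.
        assert self.deployed, "deploy() must be called before apply()"
        ucl = self.center + 3 * self.sigma
        lcl = self.center - 3 * self.sigma
        return [lcl <= x <= ucl for x in batch]

chart = IndividualsChart().fit([10.1, 9.9, 10.0, 10.2, 9.8]).deploy()
flags = chart.apply([10.0, 14.0])  # second point is far outside the limits
```

Note that `apply` takes a whole batch at once, mirroring the batch-only contract described above.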
Why were NaN values rejected?
The library enforces data contracts that vary by chart type. For most chart types, NaN values at specific positions have defined semantics: they mark missing observations and affect moving-range chains or effective subgroup sizes. Passing NaN in a context where it is not contractually permitted (for example, as a subgroup size in a P or U chart, or as a required parameter value) raises an exception or returns a diagnostic.
Check the data contract for the specific chart you are using in Data Contracts and Preparation. In most cases the correct action is to filter out records with missing measurements before calling the analysis, or to substitute a sentinel value and exclude those observations from the control limit calculation.
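A minimal pre-analysis filter might look like the following plain-Python sketch; the `(part_id, value)` record shape is an assumption for illustration, not a library type:

```python
# Drop records whose measurement is missing (None or NaN) before
# handing the batch to the analysis, as recommended above.
import math

def drop_missing(records):
    """Return only the records whose measurement is a finite number."""
    return [(pid, v) for pid, v in records
            if v is not None and not math.isnan(v)]

records = [("A1", 9.8), ("A2", float("nan")), ("A3", None), ("A4", 10.1)]
clean = drop_missing(records)
```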
Why do P/U chart limits vary by point?
P and U chart control limits depend on the sample size (lot size or inspection area) for each plotted point. When that size changes from point to point, the binomial or Poisson variance changes too, and the 3-sigma limits must be recalculated individually for each observation. This is statistically correct behaviour, not a bug.
The result object exposes the per-point limit arrays for exactly this reason; see Result Model and Rendering Semantics for the array properties, and Common Pitfalls: Rendering Variable-Sample P/U Charts from Scalar Limits for the consequences of using a scalar limit instead.
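The underlying arithmetic can be sketched in a few lines of plain Python (illustrative only; the library computes these arrays for you): the center line pools all samples, while each point's 3-sigma half-width uses that point's own sample size.

```python
# Per-point P chart limits: p_bar is pooled, but each point's limits
# use its own n_i, so the limits step up and down as n_i changes.
import math

def p_chart_limits(defectives, sample_sizes):
    p_bar = sum(defectives) / sum(sample_sizes)
    ucl, lcl = [], []
    for n in sample_sizes:
        half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        ucl.append(p_bar + half_width)
        lcl.append(max(0.0, p_bar - half_width))  # a proportion cannot go below 0
    return p_bar, ucl, lcl

p_bar, ucl, lcl = p_chart_limits([4, 2, 6], [50, 200, 100])
# Larger samples give tighter limits: here ucl[1] < ucl[2] < ucl[0].
```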
Can I compute capability before checking control?
The API does not block you from computing capability on any dataset, regardless of whether a preceding control chart shows violations. However, doing so produces statistically meaningless results. Capability indices assume that the process is in a state of statistical control; the within-sigma estimate has no valid interpretation when assignable causes are present.
Always run and evaluate a control chart for the same dataset before interpreting any capability output. This is not just a convention. It is a logical dependency: you cannot estimate “how well the process could perform if stable” if you have not established that it is stable. See Capability, Performance, and Assumption Diagnostics and Common Pitfalls: Computing Capability on Unstable Data.
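The dependency can be expressed as a guard in your own code. This is a plain-Python sketch with illustrative names, not the library's API; Cp here uses the overall sample standard deviation for brevity, whereas capability indices normally use a within-subgroup sigma estimate:

```python
# Compute Cp = (USL - LSL) / (6 * sigma) only after a prior control
# check passed, refusing to produce a number for an unstable process.
from statistics import stdev

def capability_cp(values, lsl, usl, violation_flags):
    if any(violation_flags):
        raise ValueError("process not in control; Cp is not interpretable")
    return (usl - lsl) / (6 * stdev(values))

values = [10.1, 9.9, 10.0, 10.2, 9.8]
cp = capability_cp(values, lsl=9.0, usl=11.0, violation_flags=[False] * 5)
```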
Can I use Nelson rules on EWMA or CUSUM?
No, and doing so will produce a high false-alarm rate. Nelson (and Western Electric) run rules were designed for Shewhart charts, where successive plotted values are approximately independent. EWMA and CUSUM statistics are explicitly autocorrelated by construction; each value carries memory of all previous observations.
For EWMA and CUSUM, the sole signalling mechanism is a point crossing outside the computed control limits. The increased sensitivity of these charts to small shifts derives from the accumulation of information across successive observations, not from run rules. Applying Shewhart rules to CUSUM or EWMA output is a well-documented misuse. See Time-Weighted Charts (EWMA and CUSUM) and Common Pitfalls: Applying Standard Rules to EWMA/CUSUM.
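The memory effect is visible directly from the EWMA recursion z_i = λ·x_i + (1 − λ)·z_{i−1}, sketched here in plain Python (λ = 0.2 is an illustrative choice):

```python
# Each EWMA value carries exponentially decaying weight from every
# earlier observation, so successive plotted values are autocorrelated
# by construction -- the reason run rules for independent points fail.
def ewma(xs, lam=0.2, z0=0.0):
    zs, z = [], z0
    for x in xs:
        z = lam * x + (1 - lam) * z
        zs.append(z)
    return zs

zs = ewma([1.0, 0.0, 0.0, 0.0])
# The single nonzero observation still influences every later value.
```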
What is the difference between and ?
Both
Can I serialize results for later rendering?
Yes. After calling Analyze(), the fitted chart object contains all data needed for rendering (plotted values, control limits, center lines, and rule violation flags) as plain value arrays, with no external references to the original raw data. The chart can therefore be serialized to JSON via ToJson() and deserialized in a different process, on a different machine, or at a later time without any loss of rendering fidelity.
See Integration and Persistence for serializer configuration notes, and the Worked Examples Recipe 5: API Round-Trip for a complete code example.
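The round-trip property can be illustrated with plain JSON. This Python sketch uses invented property names for illustration, not the library's serialized schema:

```python
# Because the result is just value arrays with no references back to
# the raw data, serializing and deserializing reproduces it exactly.
import json

result = {
    "values": [10.1, 9.9, 10.4],
    "center": 10.0,
    "ucl": [10.5, 10.5, 10.5],
    "lcl": [9.5, 9.5, 9.5],
    "violations": [False, False, False],
}
restored = json.loads(json.dumps(result))
assert restored == result  # lossless round trip
```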
Why did XBar‑R reject my subgroup size?
XBar‑R is designed for subgroup sizes between 2 and 10 (inclusive). For subgroup sizes of 1, use an Individuals-MR chart. For sizes above 10, use XBar‑S, which uses the subgroup standard deviation rather than the range and remains efficient for larger subgroups. The library documents supported sizes in the Support Matrix.
Attempting to use XBar‑R outside its supported range produces an exception or a diagnostic rather than a silently incorrect result, because the range becomes an inefficient estimator of dispersion for larger subgroups.
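The routing guidance above can be summarized as a small helper (plain Python sketch; the chart names are labels only, not library types):

```python
# Select a variables chart family by subgroup size, per the guidance
# above: Individuals-MR for n = 1, XBar-R for 2-10, XBar-S above 10.
def chart_for_subgroup_size(n):
    if n < 1:
        raise ValueError("subgroup size must be at least 1")
    if n == 1:
        return "Individuals-MR"
    if n <= 10:
        return "XBar-R"
    return "XBar-S"
```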
What is the minimum sample size?
There is no hard minimum enforced by the API, but statistical practice recommends at least 20–25 subgroups (or 100–125 individual observations) before treating control limits as reliable estimates. Fewer than 15 subgroups produces control limits with wide uncertainty bands; the resulting chart can neither reliably detect signals nor confidently confirm stability.
For capability analysis specifically, at least 50 observations are generally needed before point estimates of the capability indices can be treated as stable; with fewer, the estimates carry substantial sampling uncertainty.
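These thresholds can be encoded as a pre-flight check in your own code (plain Python sketch; the thresholds come from the guidance above and are not enforced by the API):

```python
# Warn when the baseline is too small to treat control limits or
# capability point estimates as reliable.
def baseline_warnings(n_subgroups, n_observations):
    warnings = []
    if n_subgroups < 20:
        warnings.append("fewer than 20 subgroups: limits have wide uncertainty")
    if n_observations < 50:
        warnings.append("fewer than 50 observations: capability point estimates unreliable")
    return warnings
```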