Process Capability: Adjust parameter estimation precision settings #462
JTPetter wants to merge 6 commits into jasp-stats:master from
Conversation
JTPetter commented on Apr 17, 2026
- fixes [Bug]: Capability - Non normal data - Parameter estimation jasp-issues#4142
- Adjusted the precision settings so that results are closer to those of other software
- Adjusted existing unit tests (decimal-place changes) and added new unit tests based on [Bug]: Capability - Non normal data - Parameter estimation jasp-issues#4142
Pull request overview
Adjusts non-normal parameter estimation behavior in the Process Capability Studies analysis to better match results from other software (per jasp-issues#4142), with corresponding test and snapshot updates.
Changes:
- Refactors/standardizes distribution parameter estimation usage (notably for Weibull/lognormal) and related probability-table calculations.
- Fixes operator-precedence issues in process capability plot x-axis limit expansion for historical parameters.
- Updates/adds unit tests, snapshots, and introduces an additional real-world dataset for verification.
Reviewed changes
Copilot reviewed 4 out of 30 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| R/processCapabilityStudies.R | Fixes precedence in historical-limit logic; refactors probability-table parameter columns and AD/p-value computation flow. |
| R/commonQualityControl.R | Aligns distribution fitting behavior by removing custom fitdist control tolerances in shared QC helpers. |
| tests/testthat/test-processCapabilityStudies.R | Updates numeric expectations; adds verification coverage for Weibull using a new dataset; updates plot snapshots. |
| tests/testthat/datasets/processCapabilityStudy/realDataExample2.csv | Adds a new dataset to reproduce/verify the reported discrepancy scenario. |
| tests/testthat/_snaps/processCapabilityStudies/*.svg | Updates and adds vdiffr snapshots affected by the estimation changes and new tests. |
| renv.lock | Updates the recorded renv metadata in the lockfile. |
```r
  table$addColumnInfo(name = "parameterEstimate1", title = gettextf("Shape (%1$s)", "\u03B2"), type = "number")
  table$addColumnInfo(name = "parameterEstimate2", title = gettextf("Scale (%1$s)", "\u03B8"), type = "number")
}
table$addColumnInfo(name = "ad", title = gettext("AD"), type = "integer")
```
In .qcProbabilityTable(), the AD column is declared as type integer, but the code computes and stores non-integer Anderson–Darling statistics (rounded to .numDecimals). This will likely display incorrectly in the UI (e.g., 0.43 shown as 0). Change the column type to number (and optionally add a numeric format) to match the actual values.
Suggested change:

```diff
- table$addColumnInfo(name = "ad", title = gettext("AD"), type = "integer")
+ table$addColumnInfo(name = "ad", title = gettext("AD"), type = "number")
```
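A quick R illustration of the mismatch the comment describes (a sketch of the effect only, not jaspBase's actual rendering path): the AD statistic is a small fractional value, so any integer coercion collapses it to 0.

```r
# Sketch: why declaring the AD column as "integer" loses the statistic.
# (Illustrative only; does not reproduce jaspBase's rendering code.)
adStatistic <- 0.43                          # typical Anderson-Darling value
asIntegerDisplay <- as.integer(adStatistic)  # 0 -- the statistic disappears
asNumberDisplay  <- round(adStatistic, 3)    # 0.43 -- value preserved
```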
```r
adStatistic <- andersonDarlingTest$statistic
adStatisticAdjusted <- adStatistic * (1 + (0.75 / sampleSize) + (2.25 / (sampleSize^2)))
if (adStatistic >= 0.6) {
  pValue <- exp(1.2937 - (5.709 * adStatisticAdjusted) + 0.0186 * (adStatisticAdjusted^2))
} else if (adStatisticAdjusted < 0.6 && adStatisticAdjusted > 0.34) {
  pValue <- exp(0.9177 - (4.279 * adStatisticAdjusted) - 1.38 * (adStatisticAdjusted^2))
} else if (adStatisticAdjusted < 0.34 && adStatisticAdjusted > 0.2) {
```
The p-value approximation branches mix adjusted and unadjusted AD statistics: the first threshold compares adStatistic to 0.6, while subsequent branches compare adStatisticAdjusted. Since the approximation formulas use the adjusted statistic, the thresholding should be based on the adjusted value consistently (otherwise you can select the wrong branch for small/medium sample sizes).
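For reference, a self-contained sketch of the branching with the threshold applied consistently to the adjusted statistic. The two upper branches use the coefficients visible in the diff; the coefficients for the two lower branches are not shown there and are assumed here from the standard D'Agostino–Stephens tables, so treat them as a sketch rather than the analysis's exact code.

```r
# Sketch: AD p-value approximation thresholded consistently on the
# small-sample-adjusted statistic. Lower-branch coefficients are
# assumed (standard D'Agostino & Stephens values), not taken from the PR.
adPValue <- function(adStatistic, sampleSize) {
  adAdj <- adStatistic * (1 + 0.75 / sampleSize + 2.25 / sampleSize^2)
  if (adAdj >= 0.6) {
    exp(1.2937 - 5.709 * adAdj + 0.0186 * adAdj^2)
  } else if (adAdj > 0.34) {
    exp(0.9177 - 4.279 * adAdj - 1.38 * adAdj^2)
  } else if (adAdj > 0.2) {
    1 - exp(-8.318 + 42.796 * adAdj - 59.938 * adAdj^2)
  } else {
    1 - exp(-13.436 + 101.14 * adAdj - 223.73 * adAdj^2)
  }
}
```

Using `>=`/`else` chains on `adAdj` alone also closes the gaps the original strict inequalities leave at exactly 0.6 and 0.34.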