`getScaledScoreFromMinMax` currently uses a split-range approach to convert a raw score to a percentage (`adapt-contrib-scoring/js/utils.js`, lines 187–192 at `93c6939`):

```js
export function getScaledScoreFromMinMax(score, minScore, maxScore) {
  // range split into negative/positive ranges (rather than min-max normalization) depending on score
  const range = (score < 0) ? Math.abs(minScore) : maxScore;
  if (!range) return 0;
  return Math.round((score / range) * 100);
}
```
Negative scores are normalised against `minScore`; positive scores against `maxScore`. Zero is a fixed anchor to preserve its semantic meaning in penalty-based scoring (e.g. "broke even").
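For reference, this is how the current behaviour plays out with `minScore = -5`, `maxScore = 10` (the function body is restated locally here so the calls are runnable on their own):

```js
// Current split-range behaviour, reproduced locally for a runnable check
function getScaledScoreFromMinMax(score, minScore, maxScore) {
  const range = (score < 0) ? Math.abs(minScore) : maxScore;
  if (!range) return 0;
  return Math.round((score / range) * 100);
}

console.log(getScaledScoreFromMinMax(-5, -5, 10));   // → -100 (worst score)
console.log(getScaledScoreFromMinMax(-2.5, -5, 10)); // → -50 (halfway into the penalty range)
console.log(getScaledScoreFromMinMax(0, -5, 10));    // → 0 ("broke even")
console.log(getScaledScoreFromMinMax(10, -5, 10));   // → 100 (best score)
```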
This scaled logic is used when determining `isPassed`/`isFailed`, and the output value may be used in visualisations such as dashboards and results.
Is this the correct base logic to apply, given that the values may differ from what is actually recorded externally? And how important is it that the internal display and what is reported to an LMS/LRS apply the same intent?
### Logic examples

Using `minScore = -5`, `maxScore = 10` as an example:
| score | split-range (current) | linear [0, 100] (SCORM 1.2) | linear [−1, 1] (SCORM 2004 / xAPI) | split-range [−1, 1] (SCORM 2004 / xAPI) |
|-------|-----------------------|------------------------------|-------------------------------------|------------------------------------------|
| -5    | -100                  | 0                            | -1.00                               | -1.00                                    |
| -2.5  | -50                   | 16.7                         | -0.667                              | -0.50                                    |
| 0     | 0                     | 33.3                         | -0.333                              | 0.00                                     |
| 5     | 50                    | 66.7                         | 0.333                               | 0.50                                     |
| 10    | 100                   | 100                          | 1.00                                | 1.00                                     |
Note: SCORM 2004 and xAPI don't mandate a formula for deriving scaled from raw, min, and max — the spec only constrains the output to [−1, 1]. The content or authoring tool is responsible for setting the value. LMS/LRS mastery threshold checks are a simple numeric comparison (`scaled >= threshold`) with no dependency on how scaled was derived. This means both the linear and split-range approaches are spec-compliant for SCORM 2004 and xAPI.
### Split-range (current behaviour)
- Zero is always neutral at 0
- `minScore` maps to −100, `maxScore` maps to +100
- −50 and +50 represent different absolute quantities when `|minScore| ≠ maxScore`
- Well-suited for when zero has semantic meaning ("broke even")
### Linear [0, 100] — SCORM 1.2
- `minScore` maps to 0, `maxScore` maps to 100
- Zero has no special position (maps to 33.3 in the example above)
- Appropriate when zero is arbitrary and the concern is relative position within the achievable range
### Linear [−1, 1] — SCORM 2004 / xAPI
- `minScore` maps to −1.0, `maxScore` maps to +1.0
- Zero has no special position (maps to −0.333 in the example above)
- Appropriate when zero is arbitrary and the concern is relative position within the achievable range
### Split-range [−1, 1] — SCORM 2004 / xAPI
- The same logic as the current function, scaled to [−1, 1] rather than [−100, 100]
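The four mappings above can be sketched as one-liners for the example range (`min = -5`, `max = 10`); the function names here are illustrative only, not existing APIs:

```js
const min = -5;
const max = 10;

// Split-range (current): anchors zero, normalises each side independently
const splitRange100 = score =>
  Math.round((score < 0 ? score / Math.abs(min) : score / max) * 100);

// Linear [0, 100] (SCORM 1.2): relative position within the achievable range
const linear100 = score => ((score - min) / (max - min)) * 100;

// Linear [-1, 1] (SCORM 2004 / xAPI): same relative position, rescaled
const linearScaled = score => ((score - min) / (max - min)) * 2 - 1;

// Split-range [-1, 1] (SCORM 2004 / xAPI): the current logic rescaled
const splitRangeScaled = score => (score < 0 ? score / Math.abs(min) : score / max);

console.log(splitRange100(-2.5));        // → -50
console.log(linear100(0).toFixed(1));    // → "33.3"
console.log(linearScaled(0).toFixed(3)); // → "-0.333"
console.log(splitRangeScaled(-2.5));     // → -0.5
```

Each value matches the corresponding row of the table above.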
### Summary

In practice this distinction rarely surfaces, because most content has `minScore = 0`, which makes all of the formulas produce identical results. We still need to decide whether to use the split-range approach or linear normalisation, or whether this should be configurable depending on the scoring design pattern.
1. split-range [−100, 100]: aligns with SCORM 2004 and xAPI. Negative logic cannot be derived in SCORM 1.2, so display and reported values differ.
2. linear [−100, 100]: aligns with SCORM 2004 and xAPI. Negative logic cannot be derived in SCORM 1.2, so display and reported values differ.
3. linear [0, 100]: aligns with SCORM 1.2, and can align with SCORM 2004 and xAPI. Negative logic cannot be inferred for SCORM 1.2, but is possible for SCORM 2004 and xAPI using the `min`, `max`, `raw` values.
This would suggest option 3 is the most appropriate for use in `getScaledScoreFromMinMax` and, in turn, `scaledScore` for each scoring set. We could create an additional method to retain the split-range logic, as we did for `averageScaledScore` in v2.
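A hypothetical sketch of what option 3 could look like: `getScaledScoreFromMinMax` switched to linear [0, 100] normalisation, with the current behaviour kept under a separate helper. The name `getSplitRangeScoreFromMinMax` is illustrative only; in `utils.js` both would be exported:

```js
// Option 3 sketch: linear [0, 100] normalisation (SCORM 1.2 aligned)
function getScaledScoreFromMinMax(score, minScore, maxScore) {
  const range = maxScore - minScore;
  if (!range) return 0;
  return Math.round(((score - minScore) / range) * 100);
}

// Current split-range behaviour retained under an illustrative name
function getSplitRangeScoreFromMinMax(score, minScore, maxScore) {
  const range = (score < 0) ? Math.abs(minScore) : maxScore;
  if (!range) return 0;
  return Math.round((score / range) * 100);
}
```

When `minScore = 0` both methods agree, so the change would only affect content that uses negative minimum scores.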