Static runtime that loads a CogFlow Builder export and runs it via jsPsych.
- Recommended workflow (JATOS)
- JATOS setup (Component Properties)
- What this runtime does
- Config loading modes
- Quick start (local)
- Debugging and validation flags
- Gabor Cue-Contingent Learning and Reward Metadata
- Supported tasks and component types
- Special paradigms
- Trial-based tasks
- Eye tracking (WebGazer)
- Current scope / assumptions
- Files
- Repositories
The default “demo-ready” deployment path is:
- Build a config in the Builder
- Export it to the CogFlow Token Store (Cloudflare Worker + KV, optional R2 assets)
- Run the Interpreter inside JATOS, loading the config via JATOS Component Properties (no fragile URL params)
This repo includes a JATOS entry wrapper: index_jatos.html.
Recommended asset layout inside your JATOS study assets for the Interpreter component:
- Component HTML file: `index_jatos.html`
- Interpreter runtime files live under `interpreter/` (so the wrapper can load `/publix/.../interpreter/src/...`)
In the Interpreter component’s Component Properties (user-defined properties), set either single-config settings or a multi-config bundle.
Set:

- `config_store_base_url`: Token Store base URL (e.g., `https://<your-worker>.workers.dev`)
- `config_store_config_id`: config id from the Builder export
- `config_store_read_token`: read token from the Builder export
If you want to run multiple configs as one session (randomized order, sequential execution), set:
- `config_store_base_url`
- `config_store_code` (any label used to tag the session, e.g. `TEST001`)
- `config_store_configs` (array)
Example:

```json
{
  "config_store_base_url": "https://<your-worker>.workers.dev",
  "config_store_code": "TEST001",
  "config_store_configs": [
    { "config_id": "...", "read_token": "...", "task_type": "rdm", "filename": "..." },
    { "config_id": "...", "read_token": "...", "task_type": "sart", "filename": "..." }
  ]
}
```

Tip: the Builder includes a JATOS Props button that generates this JSON automatically from your Token Store exports.
If your JATOS UI can’t save arrays/objects as Component Properties, you can alternatively set:
- `config_store_configs_json`: a JSON string containing the array (or `{ "configs": [...] }`)
Notes:
- Do not show tokens to participants. The interpreter keeps the token-store loading UI hidden unless `?debug=1`.
- The interpreter no longer relies on `?id=...` in JATOS (see `window.COGFLOW_DISABLE_URL_ID` in `index_jatos.html`).
Token Store note:
`config_store_base_url` should point to your Token Store Worker URL for the deployment (not a personal/demo Worker).
On completion inside JATOS, the interpreter:
- uploads a `cogflow-results-...json` file as a JATOS result file (preferred)
- submits a small JSON summary object as Result Data (rather than a raw array blob)
If the result-file upload fails for any reason, it falls back to submitting the full JSON payload in Result Data.
CogFlow Interpreter is a static jsPsych runtime that loads a CogFlow config (often from the Token Store inside JATOS), compiles it into a jsPsych timeline, runs it, and uploads results back to JATOS.
Key features:
- Token Store loading (single-config or multi-config bundle via JATOS Component Properties)
- Block expansion (parameter windows/ranges) + adaptive blocks (QUEST/staircase), including continuous-mode `block_sizing_mode: "by_duration"`, where block seconds are converted to frame counts via the experiment `frame_rate`
- Numeric list-range shorthand fallback at runtime (for robustness): if a list string like `1-4` appears in config values, it is interpreted as `1,2,3,4` during sampling
- Structural marker normalization and ordering semantics:
  - loop marker pairs (`loop-start`/`loop-end`) are normalized to loop nodes and expanded by iteration count
  - randomization marker pairs (`randomize-start`/`randomize-end`) are normalized to randomization groups and shuffled once per run
  - items outside a randomization group remain in authored order (immutable relative to the surrounding timeline)
- RDM dot-groups runtime switching:
  - if `dynamic_target_group_switch_enabled` is true, the compiler normalizes the exported `N-N` frame range and the engine alternates the active target group at random intervals drawn from that range
  - cue borders in `target-group-color` mode follow the active target group in real time
  - response correctness and feedback use the current live target group at the moment of response
- RDM dot-groups dependent direction of movement:
  - if `dependent_direction_of_movement_enabled` is true, the independent group direction fields are replaced by `dependent_group_1_direction` (base range) and `dependent_group_direction_difference` (offset list); at block expansion time, group 1's direction is sampled from the base range and group 2's direction is computed as `(group_1_direction + sampled_difference) mod 360`
- Trial-based tasks + continuous-mode tasks (including SOC Dashboard)
- DRT (Detection Response Task) scheduling via explicit start/stop components (ISO defaults supported), with automatic probe-safe behavior:
  - MW probes inside active DRT segments are auto-bracketed by inserted DRT stop/start markers
  - MOT probe/choice phases pause DRT and resume it after probe completion
- Rewards v2 integration (compile-time wrapping + runtime screens/milestones)
- Optional eye tracking via WebGazer (permission + calibration injection, plus output bundling)
- Theming support via `ui_settings.theme` (from Builder exports)
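The list-range shorthand can be illustrated with a small helper. This is a sketch of the behavior described above, not the Interpreter's actual code; the function name is ours, and only non-negative integer ranges are handled here.

```javascript
// Sketch of the numeric list-range shorthand: "1-4" inside a comma-separated
// list expands to 1,2,3,4 during sampling. Illustrative only.
function expandListString(listStr) {
  return String(listStr)
    .split(",")
    .map((token) => token.trim())
    .filter((token) => token.length > 0)
    .flatMap((token) => {
      const range = token.match(/^(\d+)\s*-\s*(\d+)$/);
      if (!range) return [Number(token)]; // plain numeric entry
      const start = Number(range[1]);
      const end = Number(range[2]);
      const out = [];
      for (let v = start; v <= end; v += 1) out.push(v);
      return out;
    });
}
```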
The interpreter supports multiple ways to load a config. In production, prefer the JATOS + Token Store path.
- Primary (JATOS): Token Store settings from Component Properties
  - Single-config: `config_store_base_url`, `config_store_config_id`, `config_store_read_token`
  - Multi-config: `config_store_base_url`, `config_store_code`, `config_store_configs` (array)
- Secondary (local / legacy): `?id=YOUR_ID`
  - Loads `configs/YOUR_ID.json`. `id` is sanitized to `[A-Za-z0-9_-]`.
- Multi-config mode (local / legacy): `?id=XXXXXXX` (7 alphanumeric characters)
  - Loads all `configs/XXXXXXX-*.json`, shuffles their order, and runs them as one jsPsych session.
  - File discovery is best-effort:
    - If the server exposes a directory listing for `configs/`, it is scraped.
    - Otherwise, create/update `configs/manifest.json` (an array of filenames) and it will be used.
- Optional remote config sources (e.g., SharePoint)
  - `?base=...` sets the directory/URL used for loading configs (default: `configs`).
  - `?manifest=...` sets an explicit manifest JSON URL (recommended for SharePoint).
  - Example: `index.html?id=ABC1234&base=https://your-site/configs&manifest=https://your-site/configs/manifest.json`
- Fallback: no `id` → you can upload a JSON file via the UI.
Use VS Code Live Server on `index.html`.

Example: `http://127.0.0.1:5500/index.html?id=experiment_config_2026-01-16`

Multi-config example: `http://127.0.0.1:5500/index.html?id=ABC1234`

Note: the exact URL prefix depends on your Live Server workspace root; the important part is `index.html?id=...`.
If Live Server doesn't expose a directory listing, generate/update the manifest:
- PowerShell: `powershell -ExecutionPolicy Bypass -File scripts/generate-manifest.ps1`
Debugging (local):

- Add `&debug=1` to auto-download the jsPsych data CSV on finish.
  - Example: `.../index.html?id=ABC1234&debug=1`
- Optional: `&debug=json` to download JSON instead.
- If eye tracking is enabled, debug mode also downloads a second gaze-only JSON file: `cogflow-eye-tracking-...json`.
- Debug mode also shows an on-screen eye-tracking HUD (when eye tracking is enabled) to confirm that samples are accumulating.
Validation (local):

- Add `&validate=1` to run a quick console self-check of adaptive blocks (QUEST/staircase) and Gabor parameter propagation.
  - This compiles a separate timeline for validation, so it does not affect the real run.
- Use `&validate=only` to run validation without starting the experiment.
- Example sample configs included (single-task, to match the Builder validators):
  - Gabor QUEST: `.../index.html?id=sample_adaptive_gabor_quest&validate=1&debug=1`
  - RDM staircase: `.../index.html?id=sample_adaptive_rdm_staircase&validate=1&debug=1`
Gabor-specific debug:

- `&gabor_debug=1` (or `&debug=1`) enforces longer stimulus/mask durations for visibility.
- In debug mode, each stimulus patch overlays `freq=... cyc/px`.
  - If it shows `freq=0.0000`, your config likely rounded the spatial frequency somewhere upstream (a common cause is treating `spatial_frequency_cyc_per_px` like a pixel integer).
The runtime supports cue-coupling and reward-availability tagging for Gabor trial generation and learning loops.
- Spatial cue validity coupling:
  - Uses `spatial_cue_validity_probability` for unilateral cues (left/right).
  - Sets per-trial `spatial_cue_valid` and flips/keeps `target_location` accordingly.
- Value target coupling:
  - Uses `value_target_value` (`high`|`low`|`neutral`|`any`) to optionally force the target side to match the selected cue value.
- Reward availability by cue value:
  - Uses `reward_availability_high`, `reward_availability_low`, `reward_availability_neutral`.
  - Sets per-trial `reward_available` and `reward_availability_probability` metadata.
Reward integration behavior:
- If a trial carries `reward_available=false`, reward awarding is blocked for that trial even when correctness/RT criteria pass.
Per-trial output includes these fields when present:

- `spatial_cue_valid`
- `value_target_value`
- `reward_available`
- `reward_availability_probability`
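The reward-availability tagging can be sketched as a small helper. This is an illustration of the behavior described above; the function name, the `cue_value` field access, and the injectable `rng` argument are our assumptions, not the Interpreter's API.

```javascript
// Sketch: tag a generated trial with reward-availability metadata based on
// its cue value and the per-value probabilities from the config.
// `rng` is injectable so the probabilistic decision can be tested.
function tagRewardAvailability(trial, settings, rng = Math.random) {
  const key = `reward_availability_${trial.cue_value || "neutral"}`;
  const p = typeof settings[key] === "number" ? settings[key] : 1;
  return {
    ...trial,
    reward_availability_probability: p,
    reward_available: rng() < p, // false blocks reward awarding downstream
  };
}
```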
The Interpreter primarily consumes `timeline[]` items by their type.
Common components:

- `html-keyboard-response` (includes Builder-authored Instructions)
- `html-button-response`
- `image-keyboard-response`
- `survey-response`
- `mw-probe` (compiled through the same survey-response plugin with `plugin_type: "mw-probe"`)
- `visual-angle-calibration`
- `reward-settings`
- `block`
- `detection-response-task-start`, `detection-response-task-stop`
Survey / MW conditional question visibility:

- `survey-response` and `mw-probe` questions can include `visible_if = { question_id, equals }`.
- Runtime applies conditions live as responses change and validates required fields only for currently visible questions.
- Legacy conditional keys (`show_if_question_id`, `show_if_value`) remain supported.
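The visibility rule can be sketched as a predicate over the current responses. This is a hypothetical helper illustrating the semantics above (name and `responses` shape are ours), covering both the `visible_if` shape and the legacy keys.

```javascript
// Sketch: decide whether a survey/MW question is currently visible,
// given a map of responses keyed by question id.
function isQuestionVisible(question, responses) {
  let cond = null;
  if (question.visible_if) {
    cond = { id: question.visible_if.question_id, equals: question.visible_if.equals };
  } else if (question.show_if_question_id !== undefined) {
    // Legacy conditional keys remain supported.
    cond = { id: question.show_if_question_id, equals: question.show_if_value };
  }
  if (!cond) return true; // unconditional questions are always visible
  return responses[cond.id] === cond.equals;
}
```

Required-field validation would then only consider questions for which this predicate returns true.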
Structural timeline nodes (typically normalized from Builder markers):

- `loop`
- `randomize-group`
Randomization scope:

- The interpreter shuffles only the children of each `randomize-group`.
- Timeline items outside that group are not reordered.
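The scoped-shuffle semantics can be sketched as follows. This is illustrative (not the compiler's actual code); the `children` field name and injectable `rng` are assumptions.

```javascript
// Sketch: shuffle only the children of each randomize-group node;
// every other timeline item keeps its authored position.
function applyRandomization(timeline, rng = Math.random) {
  return timeline.map((item) => {
    if (item.type !== "randomize-group") return item; // untouched
    const children = [...item.children];
    // Fisher-Yates shuffle, once per run
    for (let i = children.length - 1; i > 0; i -= 1) {
      const j = Math.floor(rng() * (i + 1));
      [children[i], children[j]] = [children[j], children[i]];
    }
    return { ...item, children };
  });
}
```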
Task components:

- RDM: `rdm-trial`, `rdm-practice`, `rdm-adaptive`, `rdm-dot-groups` (continuous exports may compile contiguous frames into `rdm-continuous` segments)
  - During RDM block expansion, the Interpreter applies Builder-exported timing windows (`stimulus_duration`, `response_deadline`, `inter_trial_interval`) and direction transition schedules (`random_each_trial`, `every_n_trials`, `exact_count`).
  - For `rdm-dot-groups`, the Interpreter also honors `dynamic_target_group_switch_enabled` plus `dynamic_target_group_every_n_frames`. When enabled, it samples a random inclusive frame interval from the exported range, flips `response_target_group` between group 1 and group 2 at each interval, updates cue-border target coloring against the live target group, and scores responses against that live target rather than only the initial block state.
  - `rdm-dot-groups` also supports dependent direction of movement: when `dependent_direction_of_movement_enabled` is true, each generated trial samples a base direction from `dependent_group_1_direction` and an offset from `dependent_group_direction_difference`, then sets `group_2_direction = (base + offset) mod 360` before the trial runs.
- Flanker: `flanker-trial`
- SART: `sart-trial`
- Gabor: `gabor-trial`
- Stroop: `stroop-trial`
- Emotional Stroop: `emotional-stroop-trial` (runs through the same plugin as Stroop, with forced `response_mode: "color_naming"`)
- Simon: `simon-trial`
- PVT: `pvt-trial`
- Task Switching: `task-switching-trial`
- MOT: `mot-trial`
- N-back: `nback-block` (compiled by trial-based or continuous N-back plugins depending on config defaults)
- Continuous Image Presentation (CIP): `continuous-image-presentation` (typically generated by a `block` with `component_type: "continuous-image-presentation"`)
- SOC Dashboard: `soc-dashboard` with `subtasks[]` types `sart-like`, `nback-like`, `flanker-like`, `wcst-like`, `pvt-like`
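The dependent-direction computation is simple modular arithmetic. A sketch (the function name is ours; the wrap into `[0, 360)` for negative offsets is our assumption):

```javascript
// Sketch: group 2's direction is the sampled base direction plus the
// sampled offset, wrapped into [0, 360). The double-modulo keeps the
// result non-negative even for negative offsets.
function dependentGroup2Direction(baseDeg, offsetDeg) {
  return (((baseDeg + offsetDeg) % 360) + 360) % 360;
}
```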
Emotional Stroop notes:

- Builder exports top-level defaults under `emotional_stroop_settings` (including `word_lists`/`word_options` and the shared Stroop ink `stimuli`).
- During Block expansion, the compiler couples list selection to word selection so the per-trial metadata `word_list_label`/`word_list_index` stays coherent.
Continuous Image Presentation is a block-driven paradigm: a single `timeline[]` Block expands into one jsPsych trial per selected image.
The CIP plugin must be available globally as `window.jsPsychContinuousImagePresentation`. This repo's `index.html` and `index_jatos.html` load `src/jspsych-continuous-image-presentation.js`.
The Interpreter expects CIP blocks to include fully-resolved asset URLs inside `block.parameter_values` (exported by the Builder after CIP assets are generated/applied):

- `cip_image_urls` (newline- or comma-separated list; required)
- `cip_mask_to_image_sprite_urls` (newline- or comma-separated list; optional)
- `cip_image_to_mask_sprite_urls` (newline- or comma-separated list; optional)
Additional per-block settings (also read from `parameter_values`):

- `cip_image_duration_ms`, `cip_transition_duration_ms`, `cip_transition_frames`
- `cip_choice_keys`
- `cip_repeat_mode`, `cip_images_per_block`
If a config contains CIP blocks but `cip_image_urls` is missing/empty (or the block would generate 0 trials), the Interpreter shows a blocking Interpreter error with diagnostics.
This prevents the study from silently “ending” right after instructions.
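Parsing the URL lists is straightforward; a sketch (function name is ours) of the "newline- or comma-separated" handling, where an empty result is what would trigger the blocking error:

```javascript
// Sketch: split a cip_*_urls value on newlines and/or commas,
// trimming entries and dropping blanks.
function parseCipUrlList(raw) {
  return String(raw || "")
    .split(/[\n,]+/)
    .map((url) => url.trim())
    .filter((url) => url.length > 0);
}
```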
Task Switching runs via a custom jsPsych plugin (loaded as `window.jsPsychTaskSwitching`).
Builder exports Task Switching experiment-wide defaults under:
```json
{
  "task_switching_settings": {
    "stimulus_set_mode": "letters_numbers",
    "stimulus_position": "top",
    "border_enabled": false,
    "left_key": "f",
    "right_key": "j",
    "cue_type": "explicit",
    "task_1_cue_text": "LETTERS",
    "task_2_cue_text": "NUMBERS",
    "cue_font_size_px": 28,
    "cue_duration_ms": 0,
    "cue_gap_ms": 0,
    "cue_color_hex": "#FFFFFF",
    "task_1_position": "left",
    "task_2_position": "right",
    "task_1_color_hex": "#FFFFFF",
    "task_2_color_hex": "#FFFFFF",
    "tasks": [
      { "category_a_tokens": [], "category_b_tokens": [] },
      { "category_a_tokens": [], "category_b_tokens": [] }
    ]
  }
}
```

Notes:
- `stimulus_set_mode: "letters_numbers"` uses built-in scoring:
  - Task 1 (letters): vowel vs consonant
  - Task 2 (numbers): odd vs even
- `stimulus_set_mode: "custom"` uses the `tasks[0]` and `tasks[1]` token sets.
- The compiled Task Switching trial displays a combined stimulus (task-1 token + task-2 token, e.g. `A 2`) on every trial.
- Correctness uses the task-relevant token:
  - the letters task scores `stimulus_task_1`
  - the numbers task scores `stimulus_task_2`
- Cueing modes:
  - `explicit`: shows `task_1_cue_text`/`task_2_cue_text` (and timing/color fields)
  - `position`: stimulus position varies by task via `task_1_position`/`task_2_position`
  - `color`: stimulus color varies by task via `task_1_color_hex`/`task_2_color_hex`
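The built-in `letters_numbers` scoring amounts to two tiny classifiers. A sketch (function names and returned labels are ours; how each category maps to `left_key`/`right_key` is a config detail not shown):

```javascript
// Sketch of the built-in scoring:
// letters task: vowel vs consonant; numbers task: odd vs even.
function scoreLetterTask(token) {
  return "AEIOU".includes(String(token).toUpperCase()) ? "vowel" : "consonant";
}

function scoreNumberTask(token) {
  return Number(token) % 2 === 1 ? "odd" : "even";
}
```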
MOT runs via a custom jsPsych plugin (loaded as `window.jsPsychMot`).
- Timeline `type`: `"mot-trial"` (plugin: `src/jspsych-mot.js`)
- Compiled by: `src/timelineCompiler.js` (loads `window.jsPsychMot`)
- Optional global defaults: top-level `mot_settings` is merged into each MOT trial at compile time.
- Cue — targets flash at `cue_flash_rate_hz` Hz for `cue_duration_ms` ms; objects move during this phase.
- Tracking — all objects continue moving unlabeled for `tracking_duration_ms` ms.
- Probe — participant identifies targets. Three modes:
  - `click`: participant clicks objects; the trial ends when `num_targets` are selected.
  - `number_entry`: each object shows its 1-based index; the participant types numbers and presses Enter.
  - `yes_no_recognition`: the runtime highlights one probe object at a time; the participant responds Yes/No (keyboard or buttons). `recognition_probe_count` controls how many probes are asked before advancing.
- Feedback (optional, `show_feedback: true`) — color rings indicate hits (green), misses (red), and false alarms (orange); shown for `feedback_duration_ms` ms.
- ITI — blank screen for `iti_ms` ms.
| Parameter | Default | Description |
|---|---|---|
| `num_objects` | `8` | Total number of moving objects |
| `num_targets` | `4` | Number of objects to track (highlighted during cue) |
| `speed_px_per_s` | `150` | Movement speed (pixels/second) |
| `motion_type` | `"linear"` | `"linear"` (bounce/wrap) or `"curved"` (smooth turning) |
| `probe_mode` | `"click"` | `"click"`, `"number_entry"`, or `"yes_no_recognition"` |
| `yes_key` | `"y"` | Yes key used in recognition mode |
| `no_key` | `"n"` | No key used in recognition mode |
| `recognition_probe_count` | `1` | Number of sequential yes/no probes per trial (capped by object count) |
| `cue_duration_ms` | `2000` | Duration of cue phase (ms) |
| `tracking_duration_ms` | `8000` | Duration of tracking phase (ms) |
| `iti_ms` | `1000` | Inter-trial interval (ms) |
| `show_feedback` | `false` | Whether to show post-probe feedback |
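Putting the table's parameters together, a hypothetical `mot-trial` timeline item could look like the following (values illustrative; real exports come from the Builder):

```json
{
  "type": "mot-trial",
  "num_objects": 8,
  "num_targets": 4,
  "speed_px_per_s": 150,
  "motion_type": "curved",
  "probe_mode": "yes_no_recognition",
  "recognition_probe_count": 2,
  "cue_duration_ms": 2000,
  "tracking_duration_ms": 8000,
  "show_feedback": true,
  "iti_ms": 1000
}
```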
Per-trial MOT data fields:

- `num_correct` — targets correctly identified
- `num_false_alarms` — non-targets selected
- `num_missed` — targets not selected
- `rt_first_response_ms` — RT to first response from probe onset
- `selected_objects` — JSON array of 0-based object indices selected
- `clicks` — JSON array of click events (x, y, t_ms, object_hit, object_idx)
- `recognition_response`, `recognition_response_key`, `recognition_is_yes`, `recognition_correct` — last recognition-probe response summary
- `recognition_probe_count` — number of recognition probes asked in this trial
- `recognition_probe_indices` — JSON array of object indices used as probes
- `recognition_trials` — JSON array with per-probe recognition response records
- `ended_reason` — `selection_complete` | `keypress_complete` | `timeout`
DRT is scheduled explicitly in the compiled timeline using:

- `detection-response-task-start`
- `detection-response-task-stop`
When not overridden by the config, the runtime defaults are ISO-aligned:

- Inter-trial interval: `min_iti_ms=3000`, `max_iti_ms=5000`
- Stimulus display: `stimulus_duration_ms=1000` (hidden earlier if the participant responds)
- Valid RT bounds (used for correctness only): `min_rt_ms=100`, `max_rt_ms=2500`
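Using those defaults, per-stimulus classification can be sketched as follows. This is an illustration, not the engine's code; the function name and returned labels are ours.

```javascript
// Sketch: classify a DRT stimulus from its first-response RT and the
// number of responses in the window, using the ISO-aligned default bounds.
function classifyDrtTrial(rtMs, responseCount, { min_rt_ms = 100, max_rt_ms = 2500 } = {}) {
  if (responseCount === 0) return "miss";
  const validRt = rtMs >= min_rt_ms && rtMs <= max_rt_ms;
  if (!validRt) return "invalid_rt"; // RT recorded anyway, but not a valid hit
  return responseCount === 1 ? "hit" : "extra_responses";
}
```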
The interpreter writes one buffered DRT data row per stimulus/trial (exported alongside the jsPsych rows). Key fields include:

- `drt_trial_number` (1-based within the active DRT segment)
- `drt_rt_ms` (first response RT in ms; recorded even if outside the valid bounds)
- `drt_response_count` (0 = miss, 1 = hit, >1 indicates extra responses / false alarms)
- Absolute onset timestamps: `drt_onset_unix_ms` and `drt_onset_iso`
Notes:

- The per-trial row is finalized at the end of the response window (or when the next DRT trial begins).
- The runtime also writes `drt_event: start|stop` rows with the effective `drt_settings` for auditing.
The interpreter includes a custom jsPsych plugin that renders a multi-window “SOC desktop” inside a single jsPsych trial.
- Timeline `type`: `"soc-dashboard"` (plugin: `src/jspsych-soc-dashboard.js`)
- Compiled by: `src/timelineCompiler.js` (loads `window.jsPsychSocDashboard`)
- Optional global defaults: top-level `soc_dashboard_settings` is merged into each SOC Dashboard trial.
- `.../index.html?id=sample_soc_sart_10s&debug=1`
- `.../index.html?id=sample_soc_nback_10s&debug=1`
- `.../index.html?id=sample_soc_pvt_like_01&debug=1`
- Auto-sequence demo (no per-subtask schedule): `.../index.html?id=sample_soc_3tasks_sequence&soc_debug=1`
- Overlap demo (scheduled windows): `.../index.html?id=sample_soc_nback_sart_overlap&debug=1`
Optional SOC debug overlay:

- Add `&soc_debug=1` to show additional per-window debug text inside SOC subtasks.
- `&debug=1` also enables SOC debug text.

Note: pass the config id without the `.json` suffix.
Implemented subtask types:

- `sart-like` — log triage Go/No-Go
  - GO commits a triage action that is consistent for the whole run:
    - `go_condition: "allow"` → GO yields ALLOW (respond to benign entries)
    - `go_condition: "block"` → GO yields BLOCK (respond to harmful entries)
  - Backward compatibility: legacy `target`/`distractor` values are normalized to `allow`/`block` at runtime.
  - `show_markers` (default false) toggles target/distractor badges.
  - `instructions` supports placeholder substitution: `{{GO_CONTROL}}`, `{{TARGETS}}`, `{{DISTRACTORS}}`.
- `nback-like` — alert correlation (n-back)
  - `match_field: "src_ip" | "username"`
  - `response_paradigm: "go_nogo" | "2afc"`
  - `instructions` supports placeholders: `{{GO_CONTROL}}`, `{{NOGO_CONTROL}}`, `{{N}}`, `{{MATCH_FIELD}}`.
- `flanker-like` — traffic spikes monitor (flanker-inspired "center vs flankers" decision)
  - Keys:
    - `allow_key` (default `f`)
    - `reject_key` (default `j`)
  - Timing:
    - `response_window_ms` (window in which a response is accepted)
    - `trial_interval_ms` (cadence)
    - `num_trials` (optional; if provided with a scheduled duration, trials are distributed across the run)
  - Logic:
    - `reject_rule: "high_only" | "medium_or_high"`
    - The "Reject?" prompt is only visible during the response window and self-heals if a render bug would otherwise leave it stuck on screen.
  - Logging: responses are integrated into trial events (with RT/correctness), and late responses can be attached to the most recent just-ended trial.
- `wcst-like` — phishing-style email sorting (WCST-inspired rule discovery + shifts)
  - Response mode: `response_device: "keyboard" | "mouse"`
    - Keyboard: `choice_keys` (4 keys for targets; default `1,2,3,4`)
    - Mouse: `mouse_response_mode: "click" | "drag"`
  - Participant support:
    - Optional in-window help overlay: `help_overlay_enabled`, `help_overlay_title`, `help_overlay_html`
  - Researcher-provided example libraries (optional):
    - Sender identity: `sender_domains`, `sender_display_names`
    - Email text: `subject_lines_neutral|urgent|reward|threat`, `preview_lines_neutral|urgent|reward|threat`
    - Link/attachment labels: `link_text_*`, `link_href_*`, `attachment_label_pdf|docm|zip`
- `pvt-like` — incident alert monitor (PVT-inspired vigilance)
  - Goal: respond as fast as possible when the red flash appears; early responses count as false starts.
  - Parameters:
    - `response_device: "keyboard" | "mouse"`, `response_key`
    - `countdown_seconds`, `flash_duration_ms`, `response_window_ms`
    - `alert_min_interval_ms`, `alert_max_interval_ms`
    - `show_countdown`, `show_red_flash`
  - Data: emits trial-level events and also writes summary stats under `subtasks_summary.pvt_like`.
Each subtask can include optional timing fields to automatically show/hide its window during the SOC Dashboard trial:

- `start_at_ms` or `start_delay_ms`
- `duration_ms` (preferred) or `end_at_ms`

If any timing field is set, the window is scheduled:

- The window appears/disappears automatically based on the schedule.
- The subtask itself does not start until the participant clicks its instruction popup (if `instructions` is non-empty). This anchors `t_subtask_ms` to a true, participant-controlled start.
SOC Dashboard data is written into the trial's `events` array. Key event types include:

- Window lifecycle: `subtask_window_show`, `subtask_window_hide`
- SART-like: `sart_subtask_start`, `sart_present`, `sart_response`, `sart_miss`, `sart_subtask_end`
- N-back-like: `nback_subtask_start`, `nback_present`, `nback_response`, `nback_no_response`, `nback_subtask_end`
- Flanker-like: `flanker_subtask_start`, `flanker_present`, `flanker_response`, `flanker_no_response`, `flanker_late_response`, `flanker_subtask_forced_end`
- WCST-like: `wcst_subtask_start`, `wcst_present`, `wcst_response`, `wcst_omission`, `wcst_rule_change`, `wcst_subtask_forced_end`
- PVT-like: `pvt_like_subtask_start`, `pvt_like_alert_scheduled`, `pvt_like_countdown_start`, `pvt_like_flash_onset`, `pvt_like_response`, `pvt_like_false_start`, `pvt_like_timeout`, `pvt_like_subtask_auto_end`, `pvt_like_subtask_forced_end`
The interpreter includes additional jsPsych plugins for trial-based tasks compiled from CogFlow Builder exports.
- Stroop: `.../index.html?id=sample_stroop_01&debug=1`
- Emotional Stroop: export from the Builder (task type `emotional-stroop`) and run via Token Store / JATOS
- Simon: `.../index.html?id=sample_simon_01&debug=1`
- PVT: `.../index.html?id=sample_pvt_01&debug=1`
- N-back (trial-based): `.../index.html?id=sample_nback_trial_based&debug=1`
- N-back (continuous): `.../index.html?id=sample_nback_continuous&debug=1`
Plugin mapping:

- `stroop-trial` (plugin: `src/jspsych-stroop.js`)
- `emotional-stroop-trial` (plugin: `src/jspsych-stroop.js`, forced `response_mode: "color_naming"`)
- `simon-trial` (plugin: `src/jspsych-simon.js`)
- `pvt-trial` (plugin: `src/jspsych-pvt.js`)
- `nback-block` (plugins: `src/jspsych-nback.js` for trial-based, `src/jspsych-nback-continuous.js` for continuous)
Builder exports task-specific defaults at the top level (merged into each trial when fields are missing):

- `stroop_settings`
- `emotional_stroop_settings`
- `simon_settings`
- `pvt_settings`
- `nback_settings`
If `pvt_settings.add_trial_per_false_start === true` and a block generates `pvt-trial`, the compiler uses a jsPsych loop so the block produces the requested number of valid trials (false starts do not count toward the target).
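A jsPsych loop of this shape can be sketched with `loop_function` (a standard jsPsych timeline option). This is an illustration, not the compiler's actual code: the factory name, the `is_false_start` data field, and the quota argument are assumptions.

```javascript
// Sketch: wrap a PVT trial in a jsPsych loop node that repeats until the
// requested number of non-false-start (valid) trials has been collected.
function makePvtLoop(pvtTrial, targetValidTrials) {
  let validCount = 0;
  return {
    timeline: [pvtTrial],
    loop_function(data) {
      // jsPsych passes the data from the last loop iteration here.
      const rows = data.values();
      const last = rows[rows.length - 1];
      if (!last.is_false_start) validCount += 1;
      return validCount < targetValidTrials; // true = run the timeline again
    },
  };
}
```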
The interpreter can optionally collect camera-based gaze estimates via WebGazer.
- Enable in config: `data_collection.eye_tracking.enabled = true` (the legacy `data_collection["eye-tracking"] = true` is also supported).
- Note: camera access typically requires HTTPS (or `localhost`) so the browser can prompt for permission.
- Flow:
  - A permission/start screen is injected so the camera prompt is tied to a user gesture.
  - Calibration/training is injected by default (WebGazer often returns null predictions until trained).
  - If the Builder timeline includes a Calibration Instructions preface screen (tagged with `data.plugin_type = "eye-tracking-calibration-instructions"`), it is automatically moved to appear between the permission screen and the calibration dots.
- Output:
  - On finish, an eye-tracking payload is attached to the jsPsych data.
  - If the jsPsych runtime does not allow mutating the data store safely, the interpreter falls back to appending a final extra row at export/submission time.
  - The eye-tracking payload row uses `plugin_type = "eye-tracking"` and includes:
    - `eye_tracking_samples_json` (stringified array of gaze samples)
    - `eye_tracking_calibration_json` (stringified array of calibration events)
    - `eye_tracking_stats`, start/stop results, and sample counts
- Reliability: we recommend vendoring a pinned copy at `vendor/webgazer.min.js` so studies don't depend on external CDNs.
  - The interpreter tries `vendor/webgazer.min.js` first, then falls back to a pinned CDN.
  - Override sources via `data_collection.eye_tracking.webgazer_srcs` (string array) or `webgazer_src` (single string).
- If you later want CDN-only loading (e.g., for a packaged distribution), set `webgazer_srcs` to just the CDN URL (or remove `vendor/webgazer.min.js`).
- Licensing: WebGazer is GPL-3.0; see `vendor/THIRD_PARTY_NOTICES.md` before distributing builds.
- Sample: `configs/sample_eye_tracking_webgazer.json`
Under `data_collection.eye_tracking` (object form), supported settings include:

- `enabled` (boolean)
- Sampling:
  - `sample_interval_ms` (preferred; milliseconds between stored samples)
  - `sample_rate` (Hz; used only if `sample_interval_ms` is not provided)
- Sources: `webgazer_srcs` (string array) or `webgazer_src` (string)
- UI: `show_video` (boolean) — show/hide the webcam preview box
- Calibration:
  - `calibration_enabled` (boolean; default true)
  - `calibration_points` (number; default 9)
  - `calibration_key` (string; default space)
- Permission prompting:
  - `force_permission_request` (boolean; default true)
  - `cam_constraints` (object; passed to `getUserMedia` when forcing the prompt)
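The sampling precedence (interval wins over rate) can be sketched as follows. The function name is ours, and the 100 ms final fallback is an assumption for the sketch, not a documented default.

```javascript
// Sketch: resolve the gaze-sampling interval from the eye-tracking settings.
// sample_interval_ms takes precedence; sample_rate (Hz) is converted
// to milliseconds between samples otherwise.
function resolveSampleIntervalMs(eyeTracking = {}) {
  if (typeof eyeTracking.sample_interval_ms === "number") {
    return eyeTracking.sample_interval_ms;
  }
  if (typeof eyeTracking.sample_rate === "number" && eyeTracking.sample_rate > 0) {
    return 1000 / eyeTracking.sample_rate; // Hz -> ms between samples
  }
  return 100; // illustrative fallback, not a documented default
}
```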
- Supports both `experiment_type: "trial-based"` and `"continuous"`.
- `block` components are expanded up-front and sampled per-trial (with a special case for PVT blocks when `add_trial_per_false_start` is enabled; see above).
- Adaptive/staircase blocks (e.g. QUEST) choose their next value at runtime (via `on_start`) and update after each trial (via `on_finish`).
- Expected total scale is roughly ≤ 5k trials/frames.
The compiler accepts either of these `parameter_windows` shapes:

- Builder shape: an array of objects: `{ parameter, min, max }`
- Legacy/alternate shape: an object map: `{ "coherence": { "min": 0.2, "max": 0.8 }, ... }`
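Normalizing both shapes into one canonical form can be sketched like this (the function name is ours; the array shape is used as the canonical output):

```javascript
// Sketch: accept either the Builder array shape or the legacy object-map
// shape and return a canonical [{ parameter, min, max }, ...] array.
function normalizeParameterWindows(windows) {
  if (Array.isArray(windows)) {
    // Builder shape: already [{ parameter, min, max }, ...]
    return windows.map(({ parameter, min, max }) => ({ parameter, min, max }));
  }
  // Legacy/alternate shape: { "coherence": { min, max }, ... }
  return Object.entries(windows || {}).map(([parameter, { min, max }]) => ({
    parameter,
    min,
    max,
  }));
}
```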
High-level map:

- `index.html`: local entry (loader UI + jsPsych boot)
- `index_jatos.html`: JATOS entry wrapper (reads Component Properties, disables URL-id loading in JATOS)
- `configs/`: sample configs + local/legacy configs
- `scripts/generate-manifest.ps1`: generates `configs/manifest.json` when directory listing is unavailable
- `src/main.js`: orchestration
- `src/configLoader.js`: loads configs (Token Store, URL mode, file upload)
- `src/timelineCompiler.js`: expands blocks + compiles to the jsPsych timeline
Task/plugin implementations (selected):

- `src/drtEngine.js`: DRT scheduler + buffering
- `src/jspsych-continuous-image-presentation.js`: CIP plugin
- `src/jspsych-soc-dashboard.js`: SOC Dashboard plugin
- `src/jspsych-task-switching.js`: Task Switching plugin
- `src/eyeTrackingWebgazer.js`: WebGazer integration
- `src/rdmEngine.js`: dot-motion renderer used by the RDM plugins
- `src/jspsych-rdm.js`: RDM (trial-based)
- `src/jspsych-rdm-continuous.js`: RDM (continuous)
- `src/jspsych-flanker.js`: Flanker
- `src/jspsych-sart.js`: SART
- `src/jspsych-gabor.js`: Gabor
- `src/jspsych-stroop.js`: Stroop + Emotional Stroop
- `src/jspsych-simon.js`: Simon
- `src/jspsych-pvt.js`: PVT
- `src/jspsych-nback.js`: N-back (trial-based)
- `src/jspsych-nback-continuous.js`: N-back (continuous)
- `src/jspsych-survey-response.js`: Survey response
- `src/jspsych-visual-angle-calibration.js`: Visual angle calibration
- Interpreter repo: https://github.com/KSalibay/json-interpreter-app
- Builder repo: https://github.com/KSalibay/json-builder-app
