CoreThreadScheduling

Levente Santha edited this page May 11, 2026 · 1 revision

Hybrid preemptive/cooperative thread scheduler with yieldpoints and TSI.

Overview

JNode implements a hybrid preemptive/cooperative multithreading system that balances responsiveness with efficiency. The scheduler combines hardware timer interrupts (preemptive) with compiler-inserted yieldpoints (cooperative), giving a Java operating system predictable thread switching at low per-switch cost.

The preemptive component ensures no thread monopolizes the CPU for extended periods, while the cooperative component (yieldpoints) allows threads to voluntarily yield before their timeslice expires, improving responsiveness for I/O-bound tasks. This design minimizes context-switch overhead while maintaining system responsiveness.

Key Components

| Class / File | Role |
|---|---|
| core/src/core/org/jnode/vm/scheduler/TSI.java | Thread state indicator constants |
| core/src/core/org/jnode/vm/scheduler/ThreadScheduler.java | Global scheduler with priority queues |
| core/src/core/org/jnode/vm/scheduler/Dispatcher.java | Thread dispatcher and context switching |
| core/src/core/org/jnode/vm/scheduler/VmProcessor.java | Per-CPU state and TSI management |
| core/src/core/org/jnode/vm/scheduler/VmThread.java | Thread state and execution context |
| core/src/core/org/jnode/vm/scheduler/VmThreadQueue.java | Priority/time-sorted thread queues |
| core/src/native/x86/vm-ints.asm | Assembly handlers for yieldpoints and timer |

How It Works

Hybrid Scheduling Architecture

The scheduler combines two mechanisms:

  1. Preemptive Component: Hardware timer interrupts (PIT/APIC) periodically set the TSI_SWITCH_NEEDED flag, forcing thread switches. Frequency is configurable, typically every few milliseconds.

  2. Cooperative Component: Yieldpoints are compiler-inserted checks in compiled Java code. When a thread reaches a yieldpoint and the TSI indicates a switch is needed, it voluntarily yields the CPU.
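The interplay of the two mechanisms can be sketched in plain Java: the timer side raises a switch-needed flag, and the yieldpoint side polls it at loop back-edges. This is an illustrative analogue, not JNode's actual implementation; the constant name mirrors the TSI table below, while `timerTick` and `yieldpointCheck` are hypothetical helpers.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the hybrid design: a timer handler (preemptive side)
// sets a switch-needed flag, and compiled code polls it at yieldpoints
// (cooperative side).
public class HybridSketch {
    static final int TSI_SWITCH_NEEDED = 0x0001;
    static final AtomicInteger tsi = new AtomicInteger();

    // Timer interrupt handler analogue: request a thread switch.
    static void timerTick() {
        tsi.getAndUpdate(v -> v | TSI_SWITCH_NEEDED);
    }

    // Yieldpoint analogue: test the flag at a loop back-edge.
    static boolean yieldpointCheck() {
        return (tsi.get() & TSI_SWITCH_NEEDED) != 0;
    }

    public static void main(String[] args) {
        timerTick();                       // preemptive side raises the flag
        if (yieldpointCheck()) {           // cooperative side notices it
            tsi.getAndUpdate(v -> v & ~TSI_SWITCH_NEEDED); // consume the request
            Thread.yield();                // voluntarily hand the CPU back
        }
    }
}
```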

TSI (Thread State Indicator)

The TSI is a per-processor flag word (Word type for atomic operations):

| Flag | Value | Purpose |
|---|---|---|
| TSI_SWITCH_NEEDED | 0x0001 | Thread switch requested |
| TSI_SYSTEM_READY | 0x0002 | System init complete, switches allowed |
| TSI_SWITCH_ACTIVE | 0x0004 | Thread switch in progress |
| TSI_BLOCK_SWITCH | 0x0008 | Block all switches (GC, monitors) |

Atomic operations use Word.atomicOr() and Word.atomicAnd() for safe state transitions.
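The atomic set/clear transitions can be modeled with `AtomicInteger` standing in for the vmmagic `Word` type. The flag values come from the table above; the helper names `set`, `clear`, and `isSet` are hypothetical analogues of `Word.atomicOr()` and `Word.atomicAnd()`.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative analogue of atomic TSI state transitions. AtomicInteger is
// used in place of the vmmagic Word type for a self-contained example.
public class TsiFlags {
    static final int TSI_SWITCH_NEEDED = 0x0001;
    static final int TSI_SYSTEM_READY  = 0x0002;
    static final int TSI_SWITCH_ACTIVE = 0x0004;
    static final int TSI_BLOCK_SWITCH  = 0x0008;

    final AtomicInteger tsi = new AtomicInteger();

    // Word.atomicOr() analogue: atomically set flags.
    void set(int flags)   { tsi.getAndUpdate(v -> v | flags); }

    // Word.atomicAnd() analogue: atomically clear flags.
    void clear(int flags) { tsi.getAndUpdate(v -> v & ~flags); }

    boolean isSet(int flags) { return (tsi.get() & flags) == flags; }

    public static void main(String[] args) {
        TsiFlags t = new TsiFlags();
        t.set(TSI_SYSTEM_READY);       // init complete, switches allowed
        t.set(TSI_SWITCH_NEEDED);      // timer requests a switch
        t.clear(TSI_SWITCH_NEEDED);    // switch performed, clear the request
    }
}
```

Using read-modify-write atomics here, rather than a plain load/store pair, is what makes the transitions safe when multiple CPUs touch the same flag word.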

Yieldpoint Flow

  1. Compiled code tests the TSI against the combined mask TSI_SWITCH_REQUESTED (0x0003 = TSI_SWITCH_NEEDED | TSI_SYSTEM_READY)
  2. If both flags are set, the code triggers the software interrupt YIELDPOINT_INTNO
  3. Assembly handler saves current thread registers
  4. Calls VmProcessor.reschedule()
  5. Selects highest-priority ready thread
  6. Restores new thread registers and resumes
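Step 1 of the flow above reduces to a single masked comparison: a switch happens only when the system is ready *and* a switch has been requested. A minimal sketch of that test, with values taken from the TSI table (the method name is illustrative):

```java
// Sketch of the test compiled code performs at a yieldpoint: both
// TSI_SWITCH_NEEDED and TSI_SYSTEM_READY must be set before the
// software interrupt is raised.
public class YieldpointTest {
    static final int TSI_SWITCH_NEEDED    = 0x0001;
    static final int TSI_SYSTEM_READY     = 0x0002;
    static final int TSI_SWITCH_REQUESTED = TSI_SWITCH_NEEDED | TSI_SYSTEM_READY; // 0x0003

    static boolean shouldYield(int tsi) {
        // Yield only when both bits are set: system up AND switch requested.
        return (tsi & TSI_SWITCH_REQUESTED) == TSI_SWITCH_REQUESTED;
    }

    public static void main(String[] args) {
        System.out.println(shouldYield(0x0003)); // true: ready and requested
        System.out.println(shouldYield(0x0001)); // false: system not ready yet
        System.out.println(shouldYield(0x0002)); // false: no switch requested
    }
}
```

Folding the "system ready" condition into the same mask means the check stays a two-instruction test-and-branch in compiled code, and switches are impossible before initialization completes.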

Reschedule Process

In VmProcessor.reschedule():

  1. Process kernel debugger input (if enabled)
  2. Dispatch pending interrupts via IRQ manager
  3. Re-queue current thread if still in RUNNING state
  4. Wake up sleeping threads whose wakeup time has passed
  5. Select highest-priority ready thread from queue
  6. If no thread available, run the idle thread
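Steps 3 through 6 can be sketched as a small simulation: wake any sleepers whose deadline has passed, pick the highest-priority ready thread, and fall back to the idle thread when the queue is empty. All class and field names here are hypothetical; JNode's real queues live in VmThreadQueue.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Minimal sketch of the reschedule selection loop: wake expired sleepers,
// take the highest-priority ready thread, or run the idle thread.
public class RescheduleSketch {
    record SimThread(String name, int priority, long wakeupAt) {}

    final PriorityQueue<SimThread> ready = new PriorityQueue<>(
        Comparator.comparingInt(SimThread::priority).reversed());
    final List<SimThread> sleeping = new ArrayList<>();
    final SimThread idle = new SimThread("idle", Integer.MIN_VALUE, 0);

    SimThread reschedule(long now) {
        // Step 4: wake sleeping threads whose wakeup time has passed.
        sleeping.removeIf(t -> {
            if (t.wakeupAt() <= now) { ready.add(t); return true; }
            return false;
        });
        // Steps 5-6: highest-priority ready thread, else the idle thread.
        SimThread next = ready.poll();
        return next != null ? next : idle;
    }

    public static void main(String[] args) {
        RescheduleSketch s = new RescheduleSketch();
        s.sleeping.add(new SimThread("sleeper", 5, 100));
        s.ready.add(new SimThread("low", 1, 0));
        System.out.println(s.reschedule(50).name());  // "low": sleeper not due yet
        System.out.println(s.reschedule(150).name()); // "sleeper": woke up, higher priority
        System.out.println(s.reschedule(150).name()); // "idle": no ready threads left
    }
}
```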

Gotchas & Non-Obvious Behavior

  1. Yieldpoint placement: Compiler inserts yieldpoints at method entries, loop headers, and back-edges. Conditional yieldpoints appear every ~1000 loop iterations to balance overhead vs responsiveness.

  2. @Uninterruptible restriction: Methods annotated with @Uninterruptible skip yieldpoint insertion for performance, critical for VM internals.

  3. TSI atomicity requirement: State transitions MUST use atomic operations. Non-atomic updates can cause race conditions in multi-CPU systems.

  4. TSI_BLOCK_SWITCH usage: Used during garbage collection and monitor operations to prevent unexpected context switches at critical points.

  5. FPU/XMM lazy save/restore: Floating-point state is saved/restored lazily to reduce context-switch overhead for threads that don't use FPU.

  6. Stack overflow detection: only a limited number of slots (256) remain between the stack pointer and the stack boundary; overflow handling requires careful assembly-level checks.

  7. Idle thread fallback: When no ready threads exist, the idle thread runs to prevent CPU starvation.
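Gotcha #1's ~1000-iteration spacing can be illustrated with a simple modulus counter; the real compiler emits an equivalent inline check rather than this method call, and the interval constant here is an assumption based on the figure quoted above.

```java
// Hypothetical illustration of a conditional yieldpoint: the TSI test runs
// only every ~1000 loop iterations, trading a little switch latency for much
// lower per-iteration overhead.
public class ConditionalYieldpoint {
    static final int YIELD_INTERVAL = 1000; // assumed interval from the text
    static int checks = 0;

    static void loopBody(int i) {
        if (i % YIELD_INTERVAL == 0) { // conditional yieldpoint fires here
            checks++;                  // stand-in for the real TSI test
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) loopBody(i);
        System.out.println(checks); // 10 checks across 10,000 iterations
    }
}
```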
