JIT Compilers
JNode features a tiered Just-In-Time (JIT) compiler pipeline for the x86 architecture, consisting of a fast, stack-based L1 compiler and an optimizing L2 compiler.
Unlike standard JVMs that execute bytecode via interpretation initially, JNode compiles all bytecode to native machine code before execution. There is no bytecode interpreter. To balance startup time and performance, JNode uses a tiered compilation strategy:
- L1 (Level 1) Compilers: Fast, non-optimizing compilers used for most methods to ensure quick startup.
- L2 (Level 2) Compiler: An optimizing compiler used for performance-critical methods.
The JIT compilers are written entirely in Java and run within the JNode VM itself. They translate VmByteCode into native x86 instructions which are then executed directly by the processor.
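Since every method must have native code before it runs, tier selection reduces to "compile fast first, recompile hot methods better". The following is a minimal sketch of that idea only; the invocation counter and threshold are hypothetical illustrations, not JNode's actual tiering heuristic.

```java
// Hypothetical tier-selection policy (NOT JNode's real heuristic): the
// counter and threshold are invented for illustration.
class TieredSelectionSketch {
    static final int HOT_THRESHOLD = 1000; // hypothetical hotness threshold

    // Every method gets native code up front (there is no interpreter):
    // the first compile always uses the fast L1 tier; methods that later
    // prove performance-critical are recompiled with the optimizing L2 tier.
    static String chooseCompiler(int invocationCount, boolean alreadyCompiled) {
        if (!alreadyCompiled) {
            return "L1"; // quick startup: fast, non-optimizing compile
        }
        return invocationCount >= HOT_THRESHOLD ? "L2" : "L1";
    }
}
```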
L1 Compilers (L1A and L1B)
- Location: core/src/core/org/jnode/vm/x86/compiler/l1a/ and l1b/
- Characteristics: Fast compilation, low memory footprint.
- Mechanism: These compilers emulate the Java operand stack using a "Virtual Stack" (VirtualStack.java), also called the "Item Stack". They translate bytecodes almost directly into native x86 instructions without building an Intermediate Representation (IR) or performing complex register allocation; registers are assigned greedily as items are pushed and popped.
- Inlining: Both L1A and L1B support method inlining, driven by the OptimizingBytecodeVisitor and hinted by annotations like @Inline.
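The virtual-stack mechanism can be sketched as follows. This is a simplified illustration, not JNode's actual VirtualStack/Item classes: operand-stack entries are modeled as items holding a register, registers are handed out greedily, and spilling to memory when the pool runs dry is omitted.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Simplified model of the L1 "Virtual Stack" idea: the Java operand stack is
// emulated at compile time, and each pushed item greedily claims a register.
class VirtualStackSketch {
    private final Deque<String> freeRegs =
            new ArrayDeque<>(List.of("EAX", "EBX", "ECX", "EDX"));
    private final Deque<String> stack = new ArrayDeque<>();

    // Push an operand: greedily grab the next free register for it.
    String push() {
        String reg = freeRegs.pop();
        stack.push(reg);
        return reg;
    }

    // Pop an operand: release its register back to the pool.
    String pop() {
        String reg = stack.pop();
        freeRegs.push(reg);
        return reg;
    }

    // Translate an IADD almost directly into native code: pop two items,
    // emit one ADD instruction, push the result item.
    String emitIadd(StringBuilder code) {
        String rhs = pop();
        String lhs = pop();
        code.append("ADD ").append(lhs).append(", ").append(rhs).append('\n');
        return push(); // result reuses a just-freed register
    }
}
```

Note how no IR is built: each bytecode maps straight to one or two instructions, which is what keeps L1 compilation fast.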
L2 Compiler
- Location: core/src/core/org/jnode/vm/x86/compiler/l2/
- Characteristics: Slower compilation, higher memory footprint, but generates highly optimized native code.
- Mechanism: The L2 compiler is a modern, optimizing JIT compiler. Its pipeline involves:
  1. IR Generation: Translates bytecode into an Intermediate Representation (IRControlFlowGraph).
  2. SSA Construction: Converts the IR into Static Single Assignment form (cfg.constructSSA()).
  3. Optimization Passes: Performs optimizations on the SSA graph (cfg.optimize(), cfg.removeUnusedVars()).
  4. De-SSA: Reverts SSA form (cfg.deconstrucSSA(), cfg.removeDefUseChains()).
  5. Register Allocation: Uses a Linear Scan Allocator (LinearScanAllocator.java) based on computed live ranges.
  6. Code Generation: Emits native x86 code from the optimized IR (X86CodeGenerator.java).
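The pipeline order can be sketched as a driver over the cfg method names quoted on this page. IRControlFlowGraph is declared here as a stand-in interface so the sketch compiles on its own; JNode's real class is concrete, and the surrounding wiring is illustrative only.

```java
// Stand-in interface carrying the cfg method names quoted in this page.
interface IRControlFlowGraph {
    void constructSSA();
    void optimize();
    void removeUnusedVars();
    void deconstrucSSA(); // spelling as it appears in the source
    void removeDefUseChains();
}

// Illustrative driver showing the order of the L2 passes described above.
class L2PipelineSketch {
    static void run(IRControlFlowGraph cfg) {
        cfg.constructSSA();       // SSA construction
        cfg.optimize();           // optimization passes on the SSA graph
        cfg.removeUnusedVars();
        cfg.deconstrucSSA();      // revert out of SSA form
        cfg.removeDefUseChains();
        // Register allocation and code generation (LinearScanAllocator,
        // X86CodeGenerator) would follow here, allocating registers over
        // live ranges and emitting the final x86 instructions.
    }
}
```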
| Class / Package | Role |
|---|---|
| org.jnode.vm.compiler | Base classes for the compiler framework (CompiledMethod, CompilerBytecodeVisitor). |
| org.jnode.vm.x86.compiler | x86-specific base classes and helpers (AbstractX86Compiler, X86CompilerHelper). |
| org.jnode.vm.compiler.ir | Intermediate Representation used by the L2 compiler. |
| org.jnode.vm.x86.compiler.l2.LinearScanAllocator | Register allocator for the L2 compiler. |
| org.jnode.assembler.x86 | The x86 assembler framework (X86BinaryAssembler) used to emit raw bytes. |
- Build/Boot Time: BootImageBuilder uses the L1 compiler running on the host JVM to pre-compile the core classes required for the system to boot. These compiled methods are baked into the binary boot image.
- Runtime: When a class is loaded dynamically, or a method is executed for the first time (if lazily compiled), the VM invokes the JIT compiler running on the target OS.
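The build-time flow can be sketched as a loop that drives the L1 compiler ahead of time and collects the native code for each core method into the image. Everything here except BootImageBuilder's role as described above is hypothetical naming.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative-only sketch of boot-image pre-compilation: the builder runs
// on the host JVM and bakes L1-compiled native code into the image.
class BootImagePrecompileSketch {
    // Hypothetical stand-in for "compile this method with L1, return bytes".
    interface L1Compiler {
        byte[] compile(String methodName);
    }

    // Pre-compile every core method and collect the results as the "image".
    static Map<String, byte[]> buildImage(List<String> coreMethods, L1Compiler l1) {
        Map<String, byte[]> image = new LinkedHashMap<>();
        for (String m : coreMethods) {
            image.put(m, l1.compile(m)); // baked into the binary boot image
        }
        return image;
    }
}
```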
The compilers interact deeply with JNode's VmMagic framework. MagicHelper classes within the compiler packages intercept calls to magic methods (like Unsafe.getAddress()) and directly emit the corresponding native instructions instead of emitting an actual method call. This is crucial for low-level performance.
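The interception idea can be sketched as a compile-time branch: an invoke of a known magic method emits the underlying instructions inline instead of a call. The registry, method strings, and emitted instructions below are invented for illustration; JNode's actual MagicHelper classes are more involved.

```java
import java.util.Set;

// Illustrative sketch of magic-method interception during compilation.
class MagicInterceptSketch {
    // Hypothetical registry of magic methods the compiler can inline.
    static final Set<String> MAGIC = Set.of("Unsafe.getAddress");

    // Return the "native code" emitted for an invocation of the method.
    static String emitInvoke(String method) {
        if (MAGIC.contains(method)) {
            // Magic method: emit the memory access directly, no call at all.
            return "MOV EAX, [EAX]";
        }
        return "CALL " + method; // ordinary method: a real call instruction
    }
}
```

The point is that the special-casing happens at compile time, so magic "calls" cost nothing at runtime.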
- L1-Compiler-Deep-Dive — Detailed walkthrough of the VirtualStack, Item state machine, register allocation, and optimization limits.
- L2-Compiler-Deep-Dive — SSA-based IR, linear-scan register allocator, optimization pipeline, and x86 code generation.
- VM-Magic — How magic annotations interact with the JIT.
- Code-Conventions — Explains compiler-related annotations like @Inline and @NoOptCompilePragma.
- Build-System — How BootImageBuilder pre-compiles code.