12 changes: 6 additions & 6 deletions .github/workflows/build.yml
@@ -1,4 +1,4 @@
name: Build LumenForge
name: Build JForge

on:
push:
@@ -26,15 +26,15 @@ jobs:
- label: "Universal (CPU / macOS CoreML)"
ort_artifact: onnxruntime
classifier: "-universal"
asset_name: lumenforge-universal.jar
asset_name: jforge-universal.jar

# ── NVIDIA GPU (CUDA + TensorRT) ───────────────────────────
# For Windows and Linux with NVIDIA GPUs.
# Requires CUDA toolkit + cuDNN on the host system.
- label: "NVIDIA GPU (CUDA + TensorRT)"
ort_artifact: onnxruntime_gpu
classifier: "-nvidia"
asset_name: lumenforge-nvidia.jar
asset_name: jforge-nvidia.jar

steps:
- name: Checkout
@@ -55,7 +55,7 @@ jobs:

- name: Rename artifact
run: |
JAR=$(find target -maxdepth 1 -name "lumenforge-*${{ matrix.classifier }}.jar" | head -1)
JAR=$(find target -maxdepth 1 -name "jforge-*${{ matrix.classifier }}.jar" | head -1)
cp "$JAR" "target/${{ matrix.asset_name }}"
echo "Built: ${{ matrix.asset_name }} ($(du -h target/${{ matrix.asset_name }} | cut -f1))"

@@ -87,5 +87,5 @@ jobs:
with:
generate_release_notes: true
files: |
artifacts/lumenforge-universal.jar
artifacts/lumenforge-nvidia.jar
artifacts/jforge-universal.jar
artifacts/jforge-nvidia.jar
8 changes: 4 additions & 4 deletions .vscode/tasks.json
@@ -2,7 +2,7 @@
"version": "2.0.0",
"tasks": [
{
"label": "Build LumenForge",
"label": "Build JForge",
"type": "shell",
"command": "mvn",
"args": [
@@ -14,7 +14,7 @@
"group": "build"
},
{
"label": "Run LumenForge",
"label": "Run JForge",
"type": "shell",
"command": "mvn",
"args": [
@@ -30,7 +30,7 @@
}
},
{
"label": "Package LumenForge",
"label": "Package JForge",
"type": "shell",
"command": "mvn",
"args": [
@@ -41,7 +41,7 @@
"isBackground": false
},
{
"label": "Clean LumenForge",
"label": "Clean JForge",
"type": "shell",
"command": "mvn",
"args": [
23 changes: 11 additions & 12 deletions README.md
@@ -1,10 +1,9 @@
# LumenForge
# JForge

[![Build LumenForge](https://github.com/palaashatri/lumenforge/actions/workflows/build.yml/badge.svg)](https://github.com/palaashatri/lumenforge/actions/workflows/build.yml)
[![Build JForge](https://github.com/palaashatri/jforge/actions/workflows/build.yml/badge.svg)](https://github.com/palaashatri/jforge/actions/workflows/build.yml)

Desktop Java Swing application for ONNX Runtime inference with intelligent GPU acceleration across NVIDIA, Apple, Intel, and AMD hardware.

<img width="2552" height="2026" alt="image" src="https://github.com/user-attachments/assets/8b7a8783-c795-4a2f-844c-beccc9d6855d" />

Comment on lines +3 to 7 (Copilot AI, Feb 24, 2026):

The README screenshot/image tag was removed as part of this rename-only PR. If that wasn’t intentional, consider restoring it (or replacing it with an updated screenshot) so the README still has a visual preview of the app.
## Features

@@ -33,18 +32,18 @@ Desktop Java Swing application for ONNX Runtime inference with intelligent GPU a
- **PyTorch → ONNX auto-conversion**: downloading a PyTorch model triggers automatic conversion via managed Python venv
- Manual ONNX model import
- Gated model support with HuggingFace token authentication
- Local model storage in `~/.lumenforge-models`
- Local model storage in `~/.jforge-models`

### GPU Acceleration
Intelligent execution provider selection — LumenForge probes available EPs at runtime and picks the best one:
Intelligent execution provider selection — JForge probes available EPs at runtime and picks the best one:

| Platform | Priority (highest → lowest) |
|---|---|
| **macOS** | CoreML (GPU + ANE + CPU) → CPU |
| **Windows** | TensorRT-RTX → TensorRT → CUDA → DirectML → OpenVINO → CPU |
| **Linux** | TensorRT → CUDA → ROCm → OpenVINO → CPU |

Override with `-Dlumenforge.ep=cuda` (or any EP key) to force a specific provider.
Override with `-Djforge.ep=cuda` (or any EP key) to force a specific provider.
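
For example, a minimal launch sketch combining the override with the pre-built NVIDIA JAR from the Downloads section (any supported EP key can be substituted for `cuda`):

```bash
# Force the CUDA execution provider instead of letting JForge probe at runtime.
# Swap in any supported key: cpu, coreml, cuda, tensorrt, directml, openvino, rocm.
java -Djforge.ep=cuda -jar jforge-nvidia.jar
```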

### UI
- Native look-and-feel: system-native on macOS, FlatLaf with dark/light detection on Windows/Linux
@@ -54,19 +53,19 @@ Override with `-Djforge.ep=cuda` (or any EP key) to force a specific provide

## Downloads

Pre-built fat JARs are available from [GitHub Releases](https://github.com/palaashatri/lumenforge/releases):
Pre-built fat JARs are available from [GitHub Releases](https://github.com/palaashatri/jforge/releases):

| JAR | GPU Support | Use When |
|---|---|---|
| `lumenforge-universal.jar` | macOS CoreML (M-series GPU/ANE), CPU everywhere | macOS, or Windows/Linux without NVIDIA GPU |
| `lumenforge-nvidia.jar` | CUDA + TensorRT (Windows/Linux) | Windows/Linux with NVIDIA GPU + CUDA installed |
| `jforge-universal.jar` | macOS CoreML (M-series GPU/ANE), CPU everywhere | macOS, or Windows/Linux without NVIDIA GPU |
| `jforge-nvidia.jar` | CUDA + TensorRT (Windows/Linux) | Windows/Linux with NVIDIA GPU + CUDA installed |

> **Note**: DirectML (AMD/Intel on Windows), OpenVINO (Intel), and ROCm (AMD on Linux) are auto-detected at runtime if the native libraries are installed on the system. The universal JAR handles this automatically.

```bash
# Run any variant
java -jar lumenforge-universal.jar
java -jar lumenforge-nvidia.jar
java -jar jforge-universal.jar
java -jar jforge-nvidia.jar
```

## Build from Source
@@ -112,6 +111,6 @@ See [.github/workflows/build.yml](.github/workflows/build.yml) for details.
## Notes

- Runtime is pure Java — ONNX Runtime execution with GPU fallback. No Python bridge needed at inference time.
- Override EP order via JVM property: `-Dlumenforge.ep=cpu|coreml|cuda|tensorrt|directml|openvino|rocm`
- Override EP order via JVM property: `-Djforge.ep=cpu|coreml|cuda|tensorrt|directml|openvino|rocm`
- Task tabs show preview images when output is generated, with **Open Output** to launch the file.
- If a model requires external tensor files (e.g. `weights.pb`), import the complete ONNX bundle into the model directory.
12 changes: 6 additions & 6 deletions pom.xml
@@ -3,10 +3,10 @@
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>atri.palaash.lumenforge</groupId>
<artifactId>lumenforge</artifactId>
<groupId>atri.palaash.jforge</groupId>
<artifactId>jforge</artifactId>
<version>1.0.0-SNAPSHOT</version>
<name>LumenForge</name>
<name>JForge</name>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
@@ -59,7 +59,7 @@
<configuration>
<archive>
<manifest>
<mainClass>atri.palaash.lumenforge.app.LumenForgeApp</mainClass>
<mainClass>atri.palaash.jforge.app.JForgeApp</mainClass>
</manifest>
</archive>
</configuration>
@@ -77,7 +77,7 @@
<finalName>${project.artifactId}-${project.version}${jar.classifier}</finalName>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>atri.palaash.lumenforge.app.LumenForgeApp</mainClass>
<mainClass>atri.palaash.jforge.app.JForgeApp</mainClass>
</transformer>
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
@@ -100,7 +100,7 @@
<artifactId>exec-maven-plugin</artifactId>
<version>3.5.0</version>
<configuration>
<mainClass>atri.palaash.lumenforge.app.LumenForgeApp</mainClass>
<mainClass>atri.palaash.jforge.app.JForgeApp</mainClass>
</configuration>
</plugin>
<plugin>
6 changes: 3 additions & 3 deletions scripts/convert_pytorch_to_onnx.py
@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
PyTorch → ONNX converter for LumenForge.
PyTorch → ONNX converter for JForge.

Converts HuggingFace diffusers pipelines (Stable Diffusion, SDXL, etc.)
or generic PyTorch models (.pt / .pth) to ONNX format for inference
@@ -9,7 +9,7 @@
Usage — diffusers pipeline (recommended for SD models):
python convert_pytorch_to_onnx.py \
--model_id runwayml/stable-diffusion-v1-5 \
--output_dir ~/.lumenforge-models/text-image/converted/sd-v15
--output_dir ~/.jforge-models/text-image/converted/sd-v15

Usage — generic PyTorch model (per Microsoft tutorial):
python convert_pytorch_to_onnx.py \
@@ -193,7 +193,7 @@ def convert_generic(model_path, output_dir, input_shape_str):

def main():
ap = argparse.ArgumentParser(
description="LumenForge PyTorch → ONNX converter"
description="JForge PyTorch → ONNX converter"
)
ap.add_argument(
"--model_id", required=True,
10 changes: 5 additions & 5 deletions scripts/export_torchscript.py
@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
Export Stable Diffusion v1.5 components to TorchScript format for LumenForge's
Export Stable Diffusion v1.5 components to TorchScript format for JForge's
DJL/PyTorch backend.

Usage:
@@ -9,7 +9,7 @@

Defaults:
--model_id stabilityai/stable-diffusion-2-1 (or runwayml/stable-diffusion-v1-5)
--output_dir ~/.lumenforge-models/text-image/sd-pytorch
--output_dir ~/.jforge-models/text-image/sd-pytorch

Produces:
clip_model.pt — traced CLIP text encoder
@@ -28,15 +28,15 @@


def main():
parser = argparse.ArgumentParser(description="Export SD to TorchScript for LumenForge")
parser = argparse.ArgumentParser(description="Export SD to TorchScript for JForge")
parser.add_argument(
"--model_id",
default="runwayml/stable-diffusion-v1-5",
help="HuggingFace model ID (default: runwayml/stable-diffusion-v1-5)"
)
parser.add_argument(
"--output_dir",
default=os.path.expanduser("~/.lumenforge-models/text-image/sd-pytorch"),
default=os.path.expanduser("~/.jforge-models/text-image/sd-pytorch"),
help="Output directory for TorchScript files"
)
parser.add_argument(
@@ -146,7 +146,7 @@ def main():
print(f" Output directory: {output}")
print()
print("Next steps:")
print(" 1. Build LumenForge with DJL: mvn compile -Ddjl=true")
print(" 1. Build JForge with DJL: mvn compile -Ddjl=true")
print(" 2. Run: mvn exec:java -Ddjl=true")
print(' 3. Select "SD v1.5 PyTorch (DJL)" from the model dropdown')
print()
@@ -1,13 +1,13 @@
package atri.palaash.lumenforge.app;
package atri.palaash.jforge.app;

import atri.palaash.lumenforge.inference.InferenceService;
import atri.palaash.lumenforge.inference.ServiceFactory;
import atri.palaash.lumenforge.model.ModelRegistry;
import atri.palaash.lumenforge.model.TaskType;
import atri.palaash.lumenforge.storage.ModelDownloader;
import atri.palaash.lumenforge.storage.ModelStorage;
import atri.palaash.lumenforge.ui.MainFrame;
import atri.palaash.lumenforge.ui.NativeLookAndFeel;
import atri.palaash.jforge.inference.InferenceService;
import atri.palaash.jforge.inference.ServiceFactory;
import atri.palaash.jforge.model.ModelRegistry;
import atri.palaash.jforge.model.TaskType;
import atri.palaash.jforge.storage.ModelDownloader;
import atri.palaash.jforge.storage.ModelStorage;
import atri.palaash.jforge.ui.MainFrame;
import atri.palaash.jforge.ui.NativeLookAndFeel;

import javax.swing.SwingUtilities;
import java.net.http.HttpClient;
@@ -17,14 +17,14 @@
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LumenForgeApp {
public class JForgeApp {

public static void main(String[] args) {
configureDesktopIntegration();

System.out.println("[LumenForge] Starting \u2014 Java " + Runtime.version()
System.out.println("[JForge] Starting \u2014 Java " + Runtime.version()
+ ", " + System.getProperty("os.name") + " " + System.getProperty("os.arch"));
System.out.println("[LumenForge] Max heap: " + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MB"
System.out.println("[JForge] Max heap: " + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MB"
+ ", CPUs: " + Runtime.getRuntime().availableProcessors());

ExecutorService workerPool = Executors.newVirtualThreadPerTaskExecutor();
@@ -54,7 +54,7 @@ private static void configureDesktopIntegration() {
String osName = System.getProperty("os.name", "").toLowerCase();
if (osName.contains("mac")) {
System.setProperty("apple.laf.useScreenMenuBar", "true");
System.setProperty("apple.awt.application.name", "LumenForge");
System.setProperty("apple.awt.application.name", "JForge");
System.setProperty("apple.awt.application.appearance", "system");
// Enable FlatLaf embedded title bar on macOS for a unified toolbar look
System.setProperty("flatlaf.menuBarEmbedded", "true");