
feat(workspace): update workspace layout and implement functional logic #13

Draft
ErenayDev wants to merge 23 commits into main from develop

Conversation


@ErenayDev ErenayDev commented Feb 27, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Multi-crate workspace architecture for improved project organization
    • Automated code transformation and checkpointing system
    • Discord Rich Presence integration for presence awareness
    • Execution history tracking and visualization
    • Inter-process communication bridge for runtime interaction
  • Documentation

    • Added comprehensive installation guide with platform-specific instructions
    • Added release process documentation
    • Streamlined README with focus on key features and links to detailed docs
  • Infrastructure

    • Implemented automated CI/CD workflows for testing, releases, and documentation deployment
    • Added standardized GitHub issue templates for bug reports, feature requests, and documentation updates
    • Configured distribution settings for cross-platform builds

- Convert root Cargo.toml to workspace configuration
- Define workspace members: core, parser, shared
- Add shared workspace dependencies (swc_core, serde, shared_memory)
- Update workspace metadata and package information
- Remove legacy single-crate source files in preparation for workspace migration
- Remove cargo-bin development tool
- Delete custom cargo alias configuration
- Simplify development workflow
- Clean up temporary index.ts and package.json files from root directory
- Move PR template to standard GitHub location for better discoverability
- Update .gitignore for Node.js runtime integration support
- Implement FFI-based shared memory communication layer
- Add checkpoint runtime for Node/Bun integration
- Include transform examples with working demo project
- Add session management and transform caching
- Temporarily disable end-of-file-fixer to resolve formatting conflicts

coderabbitai bot commented Feb 27, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/ci.yml (1)

6-8: ⚠️ Potential issue | 🟠 Major

CI path filter likely misses workspace crate changes.

The filter watches src/**, but this PR moves logic into workspace crates. CI may not trigger for crates/** edits.

Suggested fix
   pull_request:
     paths:
-      - "src/**"
+      - "src/**"
+      - "crates/**"
       - "Cargo.toml"
       - "Cargo.lock"
       - ".github/workflows/**"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 6 - 8, The CI path filter currently
watches "src/**", "Cargo.toml", and "Cargo.lock" but will miss changes moved
into workspace crates; update the workflow's paths filter to also include
"crates/**" (and any nested crate manifests such as crates/*/Cargo.toml and
crates/*/Cargo.lock) so edits in workspace crates trigger CI — replace or extend
the existing "src/**" entry with "crates/**" (and add patterns like
"crates/*/Cargo.toml" / "crates/*/Cargo.lock") in the same paths list.
🟡 Minor comments (19)
TODO.md-1-3 (1)

1-3: ⚠️ Potential issue | 🟡 Minor

Use valid Markdown task-list syntax.

These won’t render as interactive task items in many Markdown viewers. Use - [ ] / - [x].

Suggested fix
-[ ] JSX/TSX support
-[+] Discord RPC
-[ ] Settings page or ~/.config/checkpoint/config.yml
+- [ ] JSX/TSX support
+- [x] Discord RPC
+- [ ] Settings page or ~/.config/checkpoint/config.yml
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TODO.md` around lines 1 - 3, The Markdown checklist items in TODO.md use
plain bracket notation like "[ ]" which doesn’t render as task list items;
replace each line such as "[ ] JSX/TSX support", "[+] Discord RPC", and "[ ]
Settings page or ~/.config/checkpoint/config.yml" with GitHub-style task list
syntax using "- [ ]" for unchecked and "- [x]" for checked (convert the current
"[+]" to "- [x]" if meant checked) so they render as interactive task items in
Markdown viewers.
examples/transform_examples/.gitignore-15-16 (1)

15-16: ⚠️ Potential issue | 🟡 Minor

Fix log/report ignore patterns (typo in wildcards).

_.log and the report pattern look malformed and may fail to ignore expected files.

Suggested fix
-_.log
-report.[0-9]_.[0-9]_.[0-9]_.[0-9]_.json
+*.log
+report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/transform_examples/.gitignore` around lines 15 - 16, Replace the
malformed gitignore patterns: change "_.log" to "*.log" so all log files are
ignored, and fix the report pattern "report.[0-9]_.[0-9]_.[0-9]_.[0-9]_.json" to
use wildcards instead of the stray underscores (i.e.
"report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json") so numeric report files are properly
matched.
examples/transform_examples/index.ts-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Missing patch() method.

HttpMethod includes "PATCH" but no corresponding patch() method is exposed. Either add the method or remove "PATCH" from the type.

Suggested addition after delete()
  patch<T>(
    endpoint: string,
    body: unknown,
    config?: RequestConfig,
  ): Promise<ApiResponse<T>> {
    return this.request<T>("PATCH", endpoint, { ...config, body });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/transform_examples/index.ts` at line 1, The HttpMethod union
includes "PATCH" but the client class doesn't expose a corresponding patch()
helper; add a patch method on the same class that exposes delete(), e.g.
implement patch<T>(endpoint: string, body: unknown, config?: RequestConfig):
Promise<ApiResponse<T>> that forwards to the existing request<T>("PATCH",
endpoint, { ...config, body }); reference the HttpMethod type, the request()
method, RequestConfig and ApiResponse so callers can use the new patch helper
consistently with other verbs.
.github/workflows/web.yml-57-61 (1)

57-61: ⚠️ Potential issue | 🟡 Minor

Update actions/checkout to v4.

The actions/checkout@v3 action is outdated. GitHub recommends using v4 for better performance and Node.js 20 support.

Suggested fix
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           fetch-depth: 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/web.yml around lines 57 - 61, Replace the outdated
actions/checkout@v3 reference with actions/checkout@v4 in the workflow;
specifically update the uses line that currently reads "actions/checkout@v3" to
"actions/checkout@v4" (preserving the existing with: fetch-depth: 0 block) so
the workflow uses the newer checkout action with improved performance and
Node.js 20 support.
oranda.json-27-39 (1)

27-39: ⚠️ Potential issue | 🟡 Minor

AUR package name inconsistency.

The AUR package is listed as checkpoint while all other package managers use checkpoint-ts. This inconsistency may confuse users trying to find or install the package.

Suggested fix
         "additional": {
           "binstall": "cargo binstall checkpoint-ts",
-          "aur": "pacman -S checkpoint",
+          "aur": "pacman -S checkpoint-ts",
           "nix-env": "nix-env -i checkpoint-ts",
           "nix flake": "nix profile install github:ErenayDev/checkpoint-ts"
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@oranda.json` around lines 27 - 39, Update the AUR package name to match the
others by changing the value under artifacts.package_managers.additional.aur
(currently "pacman -S checkpoint") to use "checkpoint-ts" (e.g., "pacman -S
checkpoint-ts"); ensure the package string aligns with the preferred and other
additional entries so users find the same package across managers.
doc/RELEASE.md-3-5 (1)

3-5: ⚠️ Potential issue | 🟡 Minor

Broken relative links to root files.

The links reference files as if they're in the same directory, but this file is in doc/. Users clicking these links will get 404 errors.

Suggested fix
-1. Bump the version in [Cargo.toml](Cargo.toml) according to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-2. Update [Cargo.lock](Cargo.lock) by building the project: `cargo build`
-3. Ensure [CHANGELOG.md](CHANGELOG.md) is updated according to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format. [git-cliff](https://github.com/orhun/git-cliff) recommended.
+1. Bump the version in [Cargo.toml](../Cargo.toml) according to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+2. Update [Cargo.lock](../Cargo.lock) by building the project: `cargo build`
+3. Ensure [CHANGELOG.md](../CHANGELOG.md) is updated according to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format. [git-cliff](https://github.com/orhun/git-cliff) recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/RELEASE.md` around lines 3 - 5, Update the three broken relative links in
doc/RELEASE.md so they point to the repository root instead of the current doc/
directory: change the link targets for [Cargo.toml], [Cargo.lock], and
[CHANGELOG.md] to either "../Cargo.toml", "../Cargo.lock", "../CHANGELOG.md" or
to repository-root absolute paths (e.g., "/Cargo.toml") so clicks from
doc/RELEASE.md resolve correctly.
crates/core/src/utils/discord_rpc.rs-283-301 (1)

283-301: ⚠️ Potential issue | 🟡 Minor

Add a maximum length check to prevent memory exhaustion.

The length value read from the socket is used directly to allocate a buffer without validation. A malformed or malicious response could specify an extremely large length, causing memory exhaustion.

🛡️ Proposed fix to add length validation
 fn recv(&mut self) -> io::Result<(u32, serde_json::Value)> {
+    const MAX_PAYLOAD_SIZE: usize = 64 * 1024; // 64 KB should be plenty for Discord RPC
+    
     let mut header = [0u8; 8];
     self.connection.read_exact(&mut header)?;
     let opcode = u32::from_le_bytes(header[..4].try_into().unwrap());
     let length = u32::from_le_bytes(header[4..8].try_into().unwrap()) as usize;
+    
+    if length > MAX_PAYLOAD_SIZE {
+        return Err(io::Error::new(
+            io::ErrorKind::InvalidData,
+            format!("payload too large: {} bytes", length),
+        ));
+    }
+    
     let mut buf = vec![0u8; length];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/utils/discord_rpc.rs` around lines 283 - 301, The recv
function reads a length from the header and allocates buf without bounds
checking; add a maximum allowed payload size (e.g. MAX_PAYLOAD constant) and
validate the parsed length before allocating buf in recv, returning an io::Error
(InvalidData or Other) when length is zero or exceeds the max to avoid memory
exhaustion; keep the rest of the logic (reading buf, deserializing JSON,
handling OpCode::Close) unchanged and reference the variables header, length,
buf and the recv method/OpCode::Close to locate the change.
doc/INSTALLATION.md-13-13 (1)

13-13: ⚠️ Potential issue | 🟡 Minor

Use formal language.

-## If you wanna build from source
+## If you want to build from source
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/INSTALLATION.md` at line 13, Replace the informal heading "If you wanna
build from source" with a formal alternative such as "Building from source" or
"How to build from source"; update the heading text in doc/INSTALLATION.md (the
line containing "If you wanna build from source") to use proper formal language
and capitalization.
doc/INSTALLATION.md-29-29 (1)

29-29: ⚠️ Potential issue | 🟡 Minor

Hyphenate compound modifier.

-## Platform specific downloads
+## Platform-specific downloads
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/INSTALLATION.md` at line 29, The heading "Platform specific downloads"
should hyphenate the compound modifier; update the Markdown heading in
doc/INSTALLATION.md (the line that currently reads "## Platform specific
downloads") to "## Platform-specific downloads" so it reads correctly as a
compound modifier.
doc/INSTALLATION.md-3-6 (1)

3-6: ⚠️ Potential issue | 🟡 Minor

Fix grammar and style issues in the documentation.

Several issues affect readability:

  • Line 3: "If you wanna just downloading" → "If you want to download"
  • Line 5: "Github" → "GitHub"
  • Line 6: "wanna" → "want to"
📝 Suggested fixes
-## If you wanna just downloading the pre-compiled binary (Recommended)
+## If you want to download the pre-compiled binary (Recommended)

-1- Go to [Github Releases](https://github.com/ErenayDev/checkpoint-ts/releases) page and download your platforms file.
-2- If you wanna download pre-compiled binary with [cargo-binstall](https://github.com/cargo-bins/cargo-binstall):
+1. Go to [GitHub Releases](https://github.com/ErenayDev/checkpoint-ts/releases) page and download your platform's file.
+2. If you want to download the pre-compiled binary with [cargo-binstall](https://github.com/cargo-bins/cargo-binstall):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@doc/INSTALLATION.md` around lines 3 - 6, The heading "## If you wanna just
downloading the pre-compiled binary (Recommended)" and the list items use
informal/incorrect phrasing; update the heading to "## If you want to download
the pre-compiled binary (Recommended)" and change the list items to use proper
terms: replace "Github" with "GitHub" and "wanna" with "want to" (e.g., "If you
want to download pre-compiled binary with cargo-binstall:"). Ensure consistent
capitalization and formal tone across that section.
runtime/index.ts-114-117 (1)

114-117: ⚠️ Potential issue | 🟡 Minor

Add break after process.exit(0) to fix switch fallthrough.

While process.exit(0) terminates execution, the missing break statement triggers a linter warning and could cause issues if the exit behavior changes. Adding break makes the intent explicit.

Proposed fix
       case "shutdown":
         console.error(`[DEBUG] Shutdown requested`);
         shm.close();
         process.exit(0);
+        break;
 
       default:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/index.ts` around lines 114 - 117, In the switch handling the
"shutdown" case (the block that calls shm.close() and process.exit(0)), add a
trailing break; after process.exit(0) to prevent switch fallthrough and silence
the linter; update the case "shutdown": block (where shm.close() and
process.exit(0) are invoked) to include break; as the last statement.
crates/core/src/main.rs-87-90 (1)

87-90: ⚠️ Potential issue | 🟡 Minor

Potential panic on unwrap() if path has no filename.

entry_file.file_name() returns None for paths like ".." or /. While unlikely in practice, this could cause a panic.

Proposed fix
     let transformed_entry: PathBuf = ctx
         .checkpoint_dir
         .join("transforms")
-        .join(entry_file.file_name().unwrap());
+        .join(
+            entry_file
+                .file_name()
+                .ok_or_else(|| eyre::eyre!("Invalid entry file path: no filename"))?,
+        );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/main.rs` around lines 87 - 90, Replace the unwrap on
entry_file.file_name() with explicit handling: call entry_file.file_name() and
if it returns None return or propagate a descriptive error (or skip/log the
entry) instead of panicking, then use the obtained OsStr for building
transformed_entry (i.e., change the transformed_entry construction to use the
checked filename value obtained via ok_or_else/ok_or or a match and
propagate/handle the error accordingly).
runtime/index.ts-109-112 (1)

109-112: ⚠️ Potential issue | 🟡 Minor

Validate payload before casting to CheckpointPayload.

The message.payload is cast to CheckpointPayload without validation. If the payload is malformed or missing required fields (functionName, args), handleCheckpoint could fail unexpectedly.

Proposed fix
       case "checkpoint":
         console.error(`[DEBUG] Handling checkpoint`);
-        handleCheckpoint(message.payload as CheckpointPayload);
+        const payload = message.payload as CheckpointPayload;
+        if (!payload?.functionName || !Array.isArray(payload?.args)) {
+          shm.writeJson({ type: "error", message: "Invalid checkpoint payload" });
+          break;
+        }
+        handleCheckpoint(payload);
         break;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/index.ts` around lines 109 - 112, Before calling handleCheckpoint,
validate message.payload instead of directly casting to CheckpointPayload:
implement or use a type guard (e.g., isCheckpointPayload(payload)) that checks
presence and types of required fields like functionName and args, then in the
"checkpoint" case (where message.payload is currently cast) call
handleCheckpoint only if validation passes; otherwise log an error/throw or
ignore the message to avoid passing malformed data into handleCheckpoint.
README.md-18-23 (1)

18-23: ⚠️ Potential issue | 🟡 Minor

Improve grammar and clarity in Key Features section.

The feature descriptions have several grammatical issues that affect readability:

  • Line 18: "first parses" → "it first parses"
  • Line 19: "Then injects" → "Then it injects"; "appears to you view" → unclear
  • Line 20: "wanna checkpoint" → "want to set a checkpoint"; "written to cache" → "are written to cache"
  • Line 21: "wanna run" → "want to run"
  • Line 23: "whatever you want. really." → more professional phrasing
Suggested revision
-When you run program, first parses your code with AST's and injects checkpointing functions.
-Then injects runtime codes and then a TUI appears to you view.
-When you wanna checkpoint in a point, the function name and its parameters, variables written to cache(in your .checkpoint folder)
-If you wanna run the checkpointed function, just do it. You can edit the parameters, variables for the function.
-You can profile execution times for each function.
-Also you can edit whatever you want. really. try it yourself.
+When you run the program, it first parses your code using ASTs and injects checkpointing functions.
+Then it injects runtime code, and a TUI appears for you to interact with.
+When you set a checkpoint, the function name along with its parameters and variables are written to the cache (in your `.checkpoint` folder).
+To run the checkpointed function, simply do so—you can edit the parameters and variables beforehand.
+You can also profile execution times for each function.
+Edit anything you want—try it yourself!
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 18 - 23, The Key Features section in README.md
contains grammatical errors and unclear phrasing; update the sentences to be
clear and professional by: changing "When you run program, first parses your
code with AST's and injects checkpointing functions." to a complete sentence
like "When you run the program, it first parses your code using the AST and
injects checkpointing functions."; changing "Then injects runtime codes and then
a TUI appears to you view." to "It then injects runtime code and displays a
text-based user interface (TUI)."; replacing informal "wanna checkpoint" /
"wanna run" with "want to set a checkpoint" and "want to run the checkpointed
function", and make "variables written to cache(in your .checkpoint folder)"
into "the function name, parameters, and variables are written to cache (in the
.checkpoint folder)"; also reword "You can edit whatever you want. really. try
it yourself." to a professional line like "You can edit parameters and variables
before re-running checkpoints, and profile execution times for each function."
Ensure these edits are applied in the README Key Features section.
crates/core/Cargo.toml-14-14 (1)

14-14: ⚠️ Potential issue | 🟡 Minor

Update README.md badge to match Cargo.toml rust-version requirement.

The README.md shows "rust-1.85+" but crates/core/Cargo.toml specifies rust-version = "1.87". Update the badge to reflect the actual minimum supported version: "rust-1.87+".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/Cargo.toml` at line 14, Update the README.md Rust version badge
to match the Cargo.toml rust-version value; change the badge text from
"rust-1.85+" to "rust-1.87+" so it aligns with the `rust-version = "1.87"`
setting in crates/core/Cargo.toml and keep the badge formatting intact.
crates/core/src/services/transformer.rs-58-63 (1)

58-63: ⚠️ Potential issue | 🟡 Minor

Multiple .unwrap() calls on path operations can panic on non-UTF-8 paths.

The calls to .to_str().unwrap() on lines 60 and 63 will panic if the path contains non-UTF-8 characters. While rare on most systems, this can occur with certain filenames.

🛡️ Proposed fix to handle non-UTF-8 paths gracefully
-        if self.cache.is_cached(relative_path.to_str().unwrap(), &hash) {
-            if let Some(cached_path) = self
-                .cache
-                .get_transform_path(relative_path.to_str().unwrap())
+        let relative_path_str = relative_path.to_string_lossy();
+        if self.cache.is_cached(&relative_path_str, &hash) {
+            if let Some(cached_path) = self
+                .cache
+                .get_transform_path(&relative_path_str)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/services/transformer.rs` around lines 58 - 63, The code
currently uses to_str().unwrap() on relative_path in the cache checks (in calls
to self.cache.is_cached and self.cache.get_transform_path) which will panic for
non-UTF-8 paths; change these to handle non-UTF-8 safely (e.g., use
relative_path.to_string_lossy().as_ref() or otherwise convert OsStr to a safe
string form) before passing to is_cached/get_transform_path and handle the
Option result from get_transform_path as before. Ensure you replace both
occurrences of to_str().unwrap() (the ones used in self.cache.is_cached and
self.cache.get_transform_path) with the safe conversion and keep existing logic
for cached_path handling.
crates/core/src/tui/scenes/dashboard.rs-341-349 (1)

341-349: ⚠️ Potential issue | 🟡 Minor

String formatting issue with styled status_display.

On line 347, format!("Status: {}", status_display) may not display correctly because status_display is a styled Span (from the .yellow().bold() calls), not a plain string. The format will use Debug or Display impl which may not render the ANSI codes properly in this context.

🐛 Proposed fix
-    let status_display = if state.paused {
-        format!("{} ⏸", state.status).yellow().bold()
-    } else {
-        state.status.clone().into()
-    };
-
-    let line2_left = format!("Status: {}", status_display);
+    let status_text = if state.paused {
+        format!("{} ⏸", state.status)
+    } else {
+        state.status.clone()
+    };
+
+    let line2_left = format!("Status: {}", status_text);

Note: If you want the status to be styled, you'd need to use ratatui's Line/Span API instead of a plain format string.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/tui/scenes/dashboard.rs` around lines 341 - 349,
status_display is a styled Span but is being embedded into a plain format!
Replace the string-based assembly (line2_left/line2_right -> line2) with a
ratatui text Line built from Spans: create Spans for the literal "Status: "
(normal), the styled status span when state.paused (use
Span::styled/state.status.clone()), and the execution part ("Execution: " +
state.execution_time) as separate Spans, then combine them into a Line/Spans and
pass that to the widget (instead of the formatted string); update references to
status_display, line2_left, line2_right, and line2 accordingly.
runtime/shm/shared-memory.ts-51-66 (1)

51-66: ⚠️ Potential issue | 🟡 Minor

Read operation lacks synchronization with concurrent writes.

The read() method checks the ready flag, reads data length, copies data, then clears the flag. If the Rust side writes while this is happening, there could be a partial read. The Atomics operations only synchronize the flag, not the data itself.

Consider adding a memory barrier or ensuring the protocol guarantees the writer doesn't write until the flag is cleared.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/shm/shared-memory.ts` around lines 51 - 66, The read() method can
return partially-written data because it reads the length and payload without
re-checking the ready flag; fix by looping: after Atomics.load(flagArray, 0)
indicates ready, read dataLen via DataView.getUint32 and copy the payload
(this.buffer.slice(HEADER_SIZE,...)), then immediately re-check the ready flag
with Atomics.load(flagArray, 0); if the flag is no longer 1, discard and retry
the whole sequence; only when the flag is still 1 do Atomics.store(flagArray, 0,
0) to clear it and return the data — this uses the Atomics loads/stores as
memory barriers and ensures consistency for read().
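A minimal sketch of the re-check loop described in this prompt, assuming a layout of a 4-byte ready flag at offset 0 followed by a 4-byte little-endian length at offset 4 and the payload from offset 8 (the real shared-memory.ts layout may differ):

```typescript
const HEADER_SIZE = 8; // assumed: 4-byte ready flag + 4-byte length

function readConsistent(sab: SharedArrayBuffer): Uint8Array | null {
  const flag = new Int32Array(sab, 0, 1);
  const view = new DataView(sab);
  for (let attempt = 0; attempt < 16; attempt++) {
    if (Atomics.load(flag, 0) !== 1) return null; // nothing ready
    const len = view.getUint32(4, true);
    if (len > sab.byteLength - HEADER_SIZE) return null; // malformed length
    // Copy the payload out of shared memory.
    const copy = new Uint8Array(len);
    copy.set(new Uint8Array(sab, HEADER_SIZE, len));
    // Re-check the flag: if the writer touched it mid-copy, the copy may
    // be torn, so discard it and retry the whole sequence.
    if (Atomics.load(flag, 0) === 1) {
      Atomics.store(flag, 0, 0); // consume the message
      return copy;
    }
  }
  return null;
}
```

This only narrows the race window; as the comment notes, a protocol rule that the writer never rewrites the buffer while the flag is set is still the real fix.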
crates/core/src/services/transformer.rs-71-72 (1)

71-72: ⚠️ Potential issue | 🟡 Minor

Potential panic from .unwrap() on file_path.to_str().

If the file path is not valid UTF-8, this will panic. The transform_code function already accepts &str, so consider using to_string_lossy() for safety or returning an error.

🛡️ Proposed fix
-        let transformed = transform_code(&content, file_path.to_str().unwrap(), minify)
+        let file_path_str = file_path.to_string_lossy();
+        let transformed = transform_code(&content, &file_path_str, minify)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/services/transformer.rs` around lines 71 - 72, The call to
file_path.to_str().unwrap() can panic for non-UTF-8 paths; change the code that
calls transform_code(&content, file_path.to_str().unwrap(), minify) to avoid
unwrap by converting the path safely (e.g., use file_path.to_string_lossy() and
pass its &str or map to an Err with a clear message if to_str() is required), or
propagate a proper error instead of panicking so transform_code receives a valid
&str (refer to transform_code, file_path, content, minify).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 26-27: The PR jobs `fmt`, `clippy`, and `doc` currently use
`runs-on: self-hosted` which exposes self-hosted runners to untrusted PR code;
update the workflow to use GitHub-hosted runners for pull_request events (e.g.,
switch `runs-on` to a hosted label like `ubuntu-latest` for those jobs) or add a
fork-guard conditional so that when `github.event_name == 'pull_request'` the
`fmt`, `clippy`, and `doc` jobs do not run on `self-hosted` (or are skipped) and
only use self-hosted runners for trusted events (push/main). Ensure the change
targets the `fmt`, `clippy`, and `doc` job blocks that contain `runs-on:
self-hosted`.

In @.github/workflows/codecov.yml:
- Line 11: The workflow currently allows forked PRs to run on self-hosted
runners because it uses on.pull_request and runs-on: self-hosted; fix by
preventing forked PRs from targeting your self-hosted runner: either change the
PR-targeted job to use GitHub-hosted runners (e.g., replace runs-on: self-hosted
with a hosted label) or add a job-level guard that only runs when the pull
request comes from the same repo (use the GitHub context, e.g. an if condition
like: github.event_name != 'pull_request' ||
github.event.pull_request.head.repo.full_name == github.repository) so jobs
defined where runs-on: self-hosted (referenced in this file) refuse to execute
for forked PRs.
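The fork guard described here can be expressed as a job-level `if:` condition; the job name below is a placeholder, not the repository's actual workflow:

```yaml
jobs:
  coverage:                # placeholder job name
    runs-on: self-hosted
    # Skip when the PR head repo is a fork, so untrusted code never
    # reaches the self-hosted runner.
    if: >-
      github.event_name != 'pull_request' ||
      github.event.pull_request.head.repo.full_name == github.repository
```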

In @.github/workflows/release.yml:
- Around line 1-15: The autogenerated GitHub Actions workflow
.github/workflows/release.yml is out of date; regenerate it by running the
cargo-dist initializer to match the current cargo-dist version (0.31.0)
referenced in dist-workspace.toml: run the command dist init (or the equivalent
upgrade step for cargo-dist) in the repository root, commit the updated
.github/workflows/release.yml produced by dist, and verify the new workflow
aligns with the settings in dist-workspace.toml.

In `@crates/core/src/main.rs`:
- Around line 145-150: The hardcoded relative path "runtime" (used to set
runtime_src and runtime_dst in main.rs) fails if the process is not started from
the project root; change resolution to derive the runtime directory relative to
the executable location (use std::env::current_exe() and take its parent then
join("runtime")) before checking existence and continuing to use
ctx.checkpoint_dir.join("runtime") for the destination; optionally fall back to
an embedded/bundled resource if current_exe() cannot be resolved or the runtime
dir is missing.
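The executable-relative resolution could look like this sketch; the helper name and `Option`-based fallback are illustrative, not the crate's actual API:

```rust
use std::path::PathBuf;

// Hedged sketch: resolve runtime/ next to the binary rather than the CWD.
// main.rs would use the result where it currently joins the hardcoded
// "runtime" path, falling back to an embedded resource on None.
fn runtime_src_dir() -> Option<PathBuf> {
    let exe = std::env::current_exe().ok()?;  // absolute path to the binary
    let dir = exe.parent()?.join("runtime");  // runtime/ beside the binary
    if dir.is_dir() { Some(dir) } else { None }
}
```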

In `@crates/shared/src/lib.rs`:
- Around line 99-114: The busy-wait in wait_receive currently calls
std::hint::spin_loop() for the entire timeout which will burn CPU; change it to
a hybrid spin/yield/sleep strategy in wait_receive: spin with
std::hint::spin_loop() only for a short initial period (e.g., a few hundred
microseconds or 1–2 ms based on elapsed), then call std::thread::yield_now() for
a bit, and for longer remaining waits use
std::thread::sleep(Duration::from_millis(1)) (or sleep for the min of remaining
timeout and 1ms) between iterations; keep calling self.receive() each loop and
preserve the timeout check logic so behavior and return value remain the same.
- Around line 72-92: The receive method reads data_len from the ChannelHeader
and uses it unchecked which can lead to out-of-bounds reads; update receive (the
function named receive, referencing self.shmem, SINGLE_CHANNEL_SIZE, HEADER_SIZE
and ChannelHeader) to validate that len (header.data_len.load(...)) is within
expected bounds (e.g., len as usize <= BUFFER_SIZE and > 0) before constructing
the slice; if the check fails, clear or reset header.ready_flag appropriately
and return None (or handle as an error) to avoid creating the slice from an
invalid length.
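A minimal sketch of that bounds check, with `BUFFER_SIZE` as an assumed stand-in for the real channel capacity (`SINGLE_CHANNEL_SIZE - HEADER_SIZE`):

```rust
/// Assumed payload capacity; the real value comes from the channel layout.
const BUFFER_SIZE: usize = 4096;

/// Copy out the payload only when the advertised length is sane,
/// instead of slicing blindly on a value read from shared memory.
fn read_payload(raw: &[u8], advertised_len: u32) -> Option<Vec<u8>> {
    let len = advertised_len as usize;
    if len == 0 || len > BUFFER_SIZE || len > raw.len() {
        return None; // the caller should also reset the ready flag here
    }
    Some(raw[..len].to_vec())
}
```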

In `@examples/transform_examples/index.ts`:
- Around line 45-71: The request<T> method currently calls response.json()
unconditionally and calls clearTimeout(timeoutId) only after both fetch and json
succeed, which leaks the timeout and fails on empty/non-JSON bodies; wrap the
fetch+parse in a try/finally that always calls clearTimeout(timeoutId), and
parse conditionally by checking response.status (e.g., 204) or Content-Type
header before calling response.json(); use a safe fallback (null or raw text via
response.text()) when JSON parsing is not appropriate, and keep the
AbortController usage and timeoutId handling unchanged so the timer is always
cleaned up even on exceptions.

In `@runtime/checkpoint-runtime.ts`:
- Line 9: Wrap the call to SharedMemory.open(shmId) in a try-catch as done in
runtime/index.ts: catch any thrown error when opening the shared memory (the
call that assigns to shm) and handle it by logging the error and terminating or
returning an appropriate failure value so the exception is not unhandled; ensure
you reference the same symbols (SharedMemory.open, shmId, and the local shm
variable) when adding the try-catch and error logging/exit logic.
- Around line 41-45: The current loop using response and
shm.waitAndReadJson(1000) can hang forever; add a bounded wait (e.g. maxWaitMs
or maxRetries) and abort behavior: track elapsed time (or a retry counter)
around the loop that reads shm.waitAndReadJson<ExecuteResponse>, break and throw
a clear error (or return a failed result) when the timeout/retry limit is
reached, and optionally accept an AbortSignal to cancel early; update the code
that declares response, the while loop, and error handling so callers see a
deterministic failure instead of infinite blocking.

In `@runtime/index.ts`:
- Around line 37-45: The two consecutive shm.writeJson calls in handleCheckpoint
(the first writing {log, current_function} and the second writing {type:
"continue"}) can race; consolidate them into a single shm.writeJson call that
includes all fields (e.g. {type: "continue", log: logMessage, current_function:
functionName}) so the consumer always receives a single atomic message, or
alternatively implement a transactional/queued send in the shm wrapper so
writeJson guarantees ordering and atomicity.

In `@runtime/shm/ffi.ts`:
- Around line 3-4: The platform check for libcPath only handles Linux and treats
everything else as macOS which breaks on Windows; update the logic around the
libcPath constant in runtime/shm/ffi.ts to explicitly detect process.platform
=== "win32" (and any other unsupported platforms) and either throw a clear,
descriptive runtime error stating POSIX shm/dlopen are unsupported on Windows or
branch to a Windows-specific implementation if available; ensure the error
references libcPath and the failing operation so callers (e.g., any code that
uses dlopen/ffi via libcPath) get a clear message about unsupported platforms.

In `@runtime/shm/shared-memory.ts`:
- Around line 110-112: The close() implementation (calling
libc.symbols.munmap(this.pointer, TOTAL_SIZE)) must null out the mapped
references and prevent subsequent use; after the munmap call set this.buffer =
null and this.pointer = 0 (or null) and mark the region closed, add a private
ensureOpen() helper that throws if buffer/pointer indicate closed, and invoke
this.ensureOpen() at the start of read() and write() so they fail-fast on
use-after-free; reference the existing close(), read(), write(), TOTAL_SIZE and
libc.symbols.munmap symbols when making the changes.
- Around line 25-49: In SharedMemory.open, the mmap failure check is wrong
because it compares the numeric return from libc.symbols.mmap to a BigInt
literal; update the check in SharedMemory.open (the address variable returned by
mmap) to detect MAP_FAILED by comparing to -1 (e.g., address === -1) instead of
0xffffffffffffffffn, and keep the existing 0 check (address === 0 || address ===
-1) before throwing the "mmap failed" Error; reference libc.symbols.mmap and the
local address variable when making this change.

---

Outside diff comments:
In @.github/workflows/ci.yml:
- Around line 6-8: The CI path filter currently watches "src/**", "Cargo.toml",
and "Cargo.lock" but will miss changes moved into workspace crates; update the
workflow's paths filter to also include "crates/**" (and any nested crate
manifests such as crates/*/Cargo.toml and crates/*/Cargo.lock) so edits in
workspace crates trigger CI — replace or extend the existing "src/**" entry with
"crates/**" (and add patterns like "crates/*/Cargo.toml" /
"crates/*/Cargo.lock") in the same paths list.

---

Minor comments:
In @.github/workflows/web.yml:
- Around line 57-61: Replace the outdated actions/checkout@v3 reference with
actions/checkout@v4 in the workflow; specifically update the uses line that
currently reads "actions/checkout@v3" to "actions/checkout@v4" (preserving the
existing with: fetch-depth: 0 block) so the workflow uses the newer checkout
action with improved performance and Node.js 20 support.

In `@crates/core/Cargo.toml`:
- Line 14: Update the README.md Rust version badge to match the Cargo.toml
rust-version value; change the badge text from "rust-1.85+" to "rust-1.87+" so
it aligns with the `rust-version = "1.87"` setting in crates/core/Cargo.toml and
keep the badge formatting intact.

In `@crates/core/src/main.rs`:
- Around line 87-90: Replace the unwrap on entry_file.file_name() with explicit
handling: call entry_file.file_name() and if it returns None return or propagate
a descriptive error (or skip/log the entry) instead of panicking, then use the
obtained OsStr for building transformed_entry (i.e., change the
transformed_entry construction to use the checked filename value obtained via
ok_or_else/ok_or or a match and propagate/handle the error accordingly).
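The checked variant can be sketched as follows; `transformed_entry` is a hypothetical helper name, not the actual code in main.rs:

```rust
use std::path::{Path, PathBuf};

/// Build the transformed entry path, returning an error instead of
/// panicking when the path has no final component (e.g. "/" or "..").
fn transformed_entry(entry_file: &Path, out_dir: &Path) -> Result<PathBuf, String> {
    let name = entry_file
        .file_name()
        .ok_or_else(|| format!("entry path has no file name: {}", entry_file.display()))?;
    Ok(out_dir.join(name))
}
```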

In `@crates/core/src/services/transformer.rs`:
- Around line 58-63: The code currently uses to_str().unwrap() on relative_path
in the cache checks (in calls to self.cache.is_cached and
self.cache.get_transform_path) which will panic for non-UTF-8 paths; change
these to handle non-UTF-8 safely (e.g., use
relative_path.to_string_lossy().as_ref() or otherwise convert OsStr to a safe
string form) before passing to is_cached/get_transform_path and handle the
Option result from get_transform_path as before. Ensure you replace both
occurrences of to_str().unwrap() (the ones used in self.cache.is_cached and
self.cache.get_transform_path) with the safe conversion and keep existing logic
for cached_path handling.
- Around line 71-72: The call to file_path.to_str().unwrap() can panic for
non-UTF-8 paths; change the code that calls transform_code(&content,
file_path.to_str().unwrap(), minify) to avoid unwrap by converting the path
safely (e.g., use file_path.to_string_lossy() and pass its &str or map to an Err
with a clear message if to_str() is required), or propagate a proper error
instead of panicking so transform_code receives a valid &str (refer to
transform_code, file_path, content, minify).
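Both call sites reduce to the same safe conversion; `cache_key` is an illustrative helper, not the crate's API:

```rust
use std::path::Path;

/// Derive a cache key from a path without panicking on non-UTF-8 paths:
/// invalid bytes are replaced with U+FFFD instead of unwrapping.
fn cache_key(relative_path: &Path) -> String {
    relative_path.to_string_lossy().into_owned()
}
```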

In `@crates/core/src/tui/scenes/dashboard.rs`:
- Around line 341-349: status_display is a styled Span but is being embedded
into a plain format! Replace the string-based assembly (line2_left/line2_right
-> line2) with a ratatui text Line built from Spans: create Spans for the
literal "Status: " (normal), the styled status span when state.paused (use
Span::styled/state.status.clone()), and the execution part ("Execution: " +
state.execution_time) as separate Spans, then combine them into a Line/Spans and
pass that to the widget (instead of the formatted string); update references to
status_display, line2_left, line2_right, and line2 accordingly.

In `@crates/core/src/utils/discord_rpc.rs`:
- Around line 283-301: The recv function reads a length from the header and
allocates buf without bounds checking; add a maximum allowed payload size (e.g.
MAX_PAYLOAD constant) and validate the parsed length before allocating buf in
recv, returning an io::Error (InvalidData or Other) when length is zero or
exceeds the max to avoid memory exhaustion; keep the rest of the logic (reading
buf, deserializing JSON, handling OpCode::Close) unchanged and reference the
variables header, length, buf and the recv method/OpCode::Close to locate the
change.
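A generic sketch of the guard, using a simplified 4-byte length prefix rather than Discord's actual frame header, and an assumed `MAX_PAYLOAD` value:

```rust
use std::io::{self, Read};

/// Assumed upper bound on an IPC frame; not Discord's documented limit.
const MAX_PAYLOAD: u32 = 64 * 1024;

/// Read a length-prefixed frame, refusing zero or oversized lengths
/// before allocating, so a corrupt header cannot exhaust memory.
fn recv_frame(mut r: impl Read) -> io::Result<Vec<u8>> {
    let mut header = [0u8; 4];
    r.read_exact(&mut header)?;
    let length = u32::from_le_bytes(header);
    if length == 0 || length > MAX_PAYLOAD {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            format!("bad frame length: {length}"),
        ));
    }
    let mut buf = vec![0u8; length as usize];
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```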

In `@doc/INSTALLATION.md`:
- Line 13: Replace the informal heading "If you wanna build from source" with a
formal alternative such as "Building from source" or "How to build from source";
update the heading text in doc/INSTALLATION.md (the line containing "If you
wanna build from source") to use proper formal language and capitalization.
- Line 29: The heading "Platform specific downloads" should hyphenate the
compound modifier; update the Markdown heading in doc/INSTALLATION.md (the line
that currently reads "## Platform specific downloads") to "## Platform-specific
downloads" so it reads correctly as a compound modifier.
- Around line 3-6: The heading "## If you wanna just downloading the
pre-compiled binary (Recommended)" and the list items use informal/incorrect
phrasing; update the heading to "## If you want to download the pre-compiled
binary (Recommended)" and change the list items to use proper terms: replace
"Github" with "GitHub" and "wanna" with "want to" (e.g., "If you want to
download pre-compiled binary with cargo-binstall:"). Ensure consistent
capitalization and formal tone across that section.

In `@doc/RELEASE.md`:
- Around line 3-5: Update the three broken relative links in doc/RELEASE.md so
they point to the repository root instead of the current doc/ directory: change
the link targets for [Cargo.toml], [Cargo.lock], and [CHANGELOG.md] to either
"../Cargo.toml", "../Cargo.lock", "../CHANGELOG.md" or to repository-root
absolute paths (e.g., "/Cargo.toml") so clicks from doc/RELEASE.md resolve
correctly.

In `@examples/transform_examples/.gitignore`:
- Around line 15-16: Replace the malformed gitignore patterns: change "_.log" to
"*.log" so all log files are ignored, and replace the mangled diagnostic-report
pattern "report.[0-9]_.[0-9]_.[0-9]_.[0-9]_.json" with the standard Node.js
form "report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json" so numeric report files are
properly matched.

In `@examples/transform_examples/index.ts`:
- Line 1: The HttpMethod union includes "PATCH" but the client class doesn't
expose a corresponding patch() helper; add a patch() method to the same class
that already exposes delete(), e.g. implement patch<T>(endpoint: string, body:
unknown, config?: RequestConfig): Promise<ApiResponse<T>> that forwards to the
existing request<T>("PATCH", endpoint, { ...config, body }); reference the
HttpMethod type, the request() method, RequestConfig and ApiResponse so callers
can use the new patch helper consistently with the other verbs.

In `@oranda.json`:
- Around line 27-39: Update the AUR package name to match the others by changing
the value under artifacts.package_managers.additional.aur (currently "pacman -S
checkpoint") to use "checkpoint-ts" (e.g., "pacman -S checkpoint-ts"); ensure
the package string aligns with the preferred and other additional entries so
users find the same package across managers.

In `@README.md`:
- Around line 18-23: The Key Features section in README.md contains grammatical
errors and unclear phrasing; update the sentences to be clear and professional
by: changing "When you run program, first parses your code with AST's and
injects checkpointing functions." to a complete sentence like "When you run the
program, it first parses your code using the AST and injects checkpointing
functions."; changing "Then injects runtime codes and then a TUI appears to you
view." to "It then injects runtime code and displays a text-based user interface
(TUI)."; replacing informal "wanna checkpoint" / "wanna run" with "want to set a
checkpoint" and "want to run the checkpointed function", and make "variables
written to cache(in your .checkpoint folder)" into "the function name,
parameters, and variables are written to cache (in the .checkpoint folder)";
also reword "You can edit whatever you want. really. try it yourself." to a
professional line like "You can edit parameters and variables before re-running
checkpoints, and profile execution times for each function." Ensure these edits
are applied in the README Key Features section.

In `@runtime/index.ts`:
- Around line 114-117: In the switch handling the "shutdown" case (the block
that calls shm.close() and process.exit(0)), add a trailing break; after
process.exit(0) to prevent switch fallthrough and silence the linter; update the
case "shutdown": block (where shm.close() and process.exit(0) are invoked) to
include break; as the last statement.
- Around line 109-112: Before calling handleCheckpoint, validate message.payload
instead of directly casting to CheckpointPayload: implement or use a type guard
(e.g., isCheckpointPayload(payload)) that checks presence and types of required
fields like functionName and args, then in the "checkpoint" case (where
message.payload is currently cast) call handleCheckpoint only if validation
passes; otherwise log an error/throw or ignore the message to avoid passing
malformed data into handleCheckpoint.

In `@runtime/shm/shared-memory.ts`:
- Around line 51-66: The read() method can return partially-written data because
it reads the length and payload without re-checking the ready flag; fix by
looping: after Atomics.load(flagArray, 0) indicates ready, read dataLen via
DataView.getUint32 and copy the payload (this.buffer.slice(HEADER_SIZE,...)),
then immediately re-check the ready flag with Atomics.load(flagArray, 0); if the
flag is no longer 1, discard and retry the whole sequence; only when the flag is
still 1 do Atomics.store(flagArray, 0, 0) to clear it and return the data — this
uses the Atomics loads/stores as memory barriers and ensures consistency for
read().
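The same discard-and-retry pattern, sketched in Rust with std atomics instead of the runtime's JavaScript `Atomics` API (function and parameter names here are illustrative):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Return a copy of `data` only if `ready` stays set across the copy;
/// the loads before and after the copy act as the consistency check.
fn read_consistent(ready: &AtomicU32, data: &[u8], attempts: usize) -> Option<Vec<u8>> {
    for _ in 0..attempts {
        if ready.load(Ordering::Acquire) != 1 {
            continue; // nothing published yet; retry
        }
        let copy = data.to_vec();
        // Re-check: if the writer changed the flag mid-copy, discard and retry.
        if ready.load(Ordering::Acquire) == 1 {
            ready.store(0, Ordering::Release); // consume the message
            return Some(copy);
        }
    }
    None
}
```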

In `@TODO.md`:
- Around line 1-3: The Markdown checklist items in TODO.md use plain bracket
notation like "[ ]" which doesn’t render as task list items; replace each line
such as "[ ] JSX/TSX support", "[+] Discord RPC", and "[ ] Settings page or
~/.config/checkpoint/config.yml" with GitHub-style task list syntax using "- [
]" for unchecked and "- [x]" for checked (convert the current "[+]" to "- [x]"
if meant checked) so they render as interactive task items in Markdown viewers.

---

Nitpick comments:
In @.github/ISSUE_TEMPLATE/bug_report.yaml:
- Around line 69-81: Remove the CLI-irrelevant browser dropdown field by
deleting the YAML block with id: browsers (label: Browser (if applicable),
options: Chrome/Firefox/Safari/Edge/Opera/Other) from the bug report template;
if you want to keep an optional environment field for edge cases, replace it
with a more generic "Environment" single-line or dropdown field (e.g., id:
environment, label: Environment or Platform) that suits CLI contexts instead of
browser-specific options.

In @.github/workflows/web.yml:
- Around line 79-81: The workflow uses the third-party action reference
"untitaker/hyperlink@0.1.29"; replace the tag with the commit SHA to pin the
action (e.g., "untitaker/hyperlink@<commit-sha>") to improve security and
reproducibility, updating the "uses" entry while leaving the "with: args:
/tmp/public/" configuration intact; fetch the action repo, find the current
stable commit SHA, and update the "uses" value accordingly.

In @.pre-commit-config.yaml:
- Line 6: Restore or document the disabled pre-commit hook for consistent file
endings: either re-enable the "end-of-file-fixer" hook by uncommenting the "-
id: end-of-file-fixer" entry in .pre-commit-config.yaml, or if there is a
deliberate reason to keep it disabled, add an inline comment next to the
commented line explaining the reason (e.g., tooling conflict) so future
contributors know why the hook was turned off.

In `@crates/core/src/main.rs`:
- Around line 92-94: Remove the informal "delete if u want" comment next to the
Discord RPC block in main.rs; either delete the line entirely or replace it with
a professional note such as "Optional: Discord RPC integration (can be removed
if not used)" so the comment remains clear and production-appropriate near the
Discord RPC logic in the main function or module.

In `@crates/core/src/services/ipc_bridge.rs`:
- Around line 17-22: The current runtime_dir resolution using
app_path_buf.parent().and_then(|p| p.parent()) is brittle; change it to a robust
lookup or configurable option: accept a runtime path from configuration/env
(e.g., a new parameter or IPC_BRIDGE_RUNTIME env var) and fall back to walking
ancestors from app_path_buf to find a directory named "runtime" (iterate parents
checking for a child "runtime" directory) before returning an error; update the
code that sets runtime_dir (the runtime_dir variable/assignment) to use the
configured value first, then the ancestor-search fallback, and document the new
behavior.
- Around line 70-85: Remove the fixed 200ms sleep before the polling loop and
instead rely on repeated calls to shm.wait_receive_json to detect
"runtime_ready" (or implement a short initial timeout/backoff) so we don't
always incur an unnecessary delay; update the loop that uses attempts, ready,
and buffered_messages (and the call to
shm.wait_receive_json::<serde_json::Value>) to use a shorter per-attempt timeout
(e.g., 100–200ms) or exponential backoff so total wait is bounded and responsive
rather than the current fixed 200ms + up to 10×500ms.
- Around line 120-125: The shutdown method currently discards errors from
send_json and child.wait; update ipc_bridge::shutdown to capture those Result
values and log failures instead of ignoring them: call self.send_json(...) and
if Err(e) use the crate's logging (e.g., log::error or tracing::error) to record
"failed to send shutdown message" with the error, and when waiting on the child
(inside the if let Some(ref mut child) = self.child) unwrap the Result from
child.wait() and log any Err as "child wait failed" with the error; keep the
method non-panicking and continue shutdown after logging.
- Around line 128-133: The Drop impl for IpcBridge should attempt a graceful
shutdown instead of unconditionally calling child.kill(); change the Drop for
IpcBridge to call self.shutdown() and handle its Result/Errors, falling back to
child.kill() only if shutdown fails or is unavailable. If shutdown() is async,
arrange to run it to completion (e.g., spawn a short-lived runtime or use a
blocking executor) inside Drop and handle errors, ensuring you don’t double-kill
an already-shutdown child; reference the Drop impl, the IpcBridge::shutdown()
method, and child.kill() when making the change.

In `@crates/core/src/services/transformer.rs`:
- Around line 27-42: The loop currently swallows WalkDir errors (including
symlink cycle detections) by using .filter_map(|e| e.ok()) after
WalkDir::follow_links(true); change it to handle the iterator's Result
explicitly: iterate over WalkDir::new(&self.ctx.root).follow_links(true)
(optionally with .max_depth(n) if you want a hard limit) and match on each entry
Result—on Ok(entry) run the existing is_transformable/transform_file logic
(using the same transformed push and transform_file call), and on Err(e) log a
warning or error (including e.to_string()) so symlink cycle errors are visible
instead of silently ignored.

In `@crates/core/src/tui/app.rs`:
- Line 71: The match arm uses a tight binding `_= self.handle_events()`; update
it to include a space after the underscore (i.e., change `_=` to `_ =`) in the
expression where self.handle_events() is awaited/handled (the `_ =
self.handle_events() => {}` match arm) to match formatting conventions.

In `@crates/core/src/tui/scenes/dashboard.rs`:
- Around line 185-197: In continue_execution, do not ignore the Result from
ipc_bridge.send_json; call bridge.send_json(...) and match its Result—on Ok,
proceed to set self.paused = false, self.pending_checkpoint = None, self.status
= "Running", and call self.add_log("Execution continued"); on Err, log the error
(e.g., via self.add_log or a logger) and keep the local state unchanged so it
remains paused; update code around the ipc_bridge/send_json call in
continue_execution to perform this error handling and avoid optimistic state
changes.
- Around line 199-212: The skip_function currently updates paused,
pending_checkpoint, status, and logs regardless of whether ipc_bridge.send_json
succeeded; change skip_function so it captures the result of
bridge.send_json(&json!({...})) and only perform the state changes (set paused =
false, pending_checkpoint = None, status = "Running".to_string(), call
add_log("Function skipped")) when send_json returns Ok; on Err, record/log the
error instead (e.g. via add_log or similar error logging) and do not mutate
paused/pending_checkpoint/status. Ensure you reference the ipc_bridge, send_json
call, and state fields (paused, pending_checkpoint, status, add_log) when
implementing the conditional behavior.
- Around line 105-115: In poll_ipc_messages the current call to
ipc_bridge.receive_json::<serde_json::Value> hides parse failures by returning
None; change the logic to pull the raw message (use the bridge method that
returns the raw/text payload or bytes instead of receive_json), attempt
serde_json::from_str/from_slice yourself, and on Err call self.add_log with a
clear parse-failure message including the error and the raw payload so the bad
message is not silently lost; on Ok continue to call self.add_log(...) and
self.handle_ipc_message(...) as before. Ensure you update poll_ipc_messages and
refer to ipc_bridge.receive_json (replace it with the raw receive method),
serde_json::from_str/from_slice, add_log, and handle_ipc_message.

In `@crates/core/src/tui/scenes/history.rs`:
- Line 16: The ExecutionHistoryState.selected_index field is set but never
applied to the UI; update draw_function_timeline to use that index to highlight
the current row by passing or mapping selected_index into the table state/row
rendering logic (e.g., set the TableState::selected(Some(selected_index)) or
apply a style when iterating rows) so the row at
ExecutionHistoryState.selected_index is visually highlighted; modify
draw_function_timeline to read ExecutionHistoryState.selected_index and apply it
to the table renderer or row styling accordingly.
- Around line 36-104: The Default impl for ExecutionHistoryState currently
returns hardcoded sample data; change default() to return an empty/zeroed state
(e.g., total_functions=0, skipped=0, executed=0, total_time=0.0,
function_calls=Vec::new(), selected_index=0) and extract the current sample
vector construction into a new associated constructor named with_sample_data()
that returns an ExecutionHistoryState populated with the existing example
FunctionCall entries; update any tests or examples to call
ExecutionHistoryState::with_sample_data() when they need demo data and keep
Default clean for production/real usage.
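The split can be sketched with simplified stand-ins for the real types (the fields shown are a subset, chosen for illustration):

```rust
/// Simplified stand-in for the crate's FunctionCall record.
#[derive(Debug, Clone)]
struct FunctionCall { name: String, duration_ms: f64 }

/// `Default` yields an empty state; demo data has its own constructor.
#[derive(Debug, Default)]
struct ExecutionHistoryState {
    total_functions: usize,
    total_time: f64,
    function_calls: Vec<FunctionCall>,
    selected_index: usize,
}

impl ExecutionHistoryState {
    /// Sample data moved out of `default()` into an explicit constructor,
    /// so production code never accidentally starts with demo entries.
    fn with_sample_data() -> Self {
        let function_calls = vec![
            FunctionCall { name: "fetchUsers".into(), duration_ms: 12.5 },
            FunctionCall { name: "renderPage".into(), duration_ms: 3.1 },
        ];
        Self {
            total_functions: function_calls.len(),
            total_time: function_calls.iter().map(|c| c.duration_ms).sum(),
            function_calls,
            selected_index: 0,
        }
    }
}
```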

In `@crates/core/src/utils/discord_rpc.rs`:
- Around line 34-38: The Button struct currently derives Debug but not Clone
while Activity derives Clone and contains Vec<Button>, so add Clone to Button's
derive list (i.e., change #[derive(Debug)] to #[derive(Debug, Clone)]) so
Activity can be cloned along with its Vec<Button> elements; update the Button
declaration (pub struct Button { pub label: String, pub url: String })
accordingly.
- Around line 337-339: The connected() method currently only checks that
self.rpc is Some and thus can return true for a dead/closed connection; change
connected() to verify actual connection health by consulting the inner RPC
object's liveness API (e.g., call a method like
is_connected()/is_open()/is_healthy() or perform a lightweight ping) so it
returns true only when rpc is Some and that rpc reports an active connection;
update the implementation in the connected() function to use rpc.as_ref() and
the inner health check and document the behavior for callers.

In `@crates/core/src/utils/entry_finder.rs`:
- Around line 20-21: The error produced when find_entry_file receives None uses
PathNotFound(PathBuf::new()) which yields an empty path in messages; change this
to return a more descriptive error by either adding a new EntryError variant
(e.g., EntryError::MissingPath or EntryError::InputNone) and returning that from
find_entry_file, or by passing a placeholder PathBuf (e.g.,
PathBuf::from("<missing path>")) into the existing PathNotFound variant; update
the EntryError enum (and any display/formatting implementations and tests)
accordingly and replace the ok_or_else call in find_entry_file to create the
new/placeholder error.
- Line 56: The JSON parse error is being discarded when calling
serde_json::from_str(&content) and mapping all errors to
EntryError::InvalidPackageJson; update the error mapping to preserve context by
passing the original serde_json::Error (or its message) into the EntryError
variant. Locate the serde_json::from_str(&content) call (the pkg: Value
assignment) and change .map_err(|_| EntryError::InvalidPackageJson) to something
like .map_err(|e| EntryError::InvalidPackageJson(e)) or .map_err(|e|
EntryError::InvalidPackageJson(e.to_string())), and if necessary extend the
EntryError::InvalidPackageJson variant to accept and store the serde_json::Error
or a String so callers can see the original parse details.

In `@crates/core/src/utils/project_context.rs`:
- Around line 32-50: The function find_project_root currently returns
start_dir.to_path_buf() when no package.json is found, which can be a file path
and is ambiguous; change find_project_root to return Option<PathBuf> (or
Result<PathBuf, _>) so callers can handle the "not found" case: keep the
existing logic for determining start_dir (use entry_file.parent() as before),
return Some(current) when current.join("package.json").exists(), and return None
when current.pop() returns false (i.e., no package.json found); update all call
sites that rely on find_project_root to handle the Option/Result accordingly
(e.g., propagate the None/Error or provide a clear fallback). Ensure function
name find_project_root and the variables start_dir and current.pop() are used to
locate and modify the code.
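A self-contained sketch of the `Option`-returning search:

```rust
use std::path::{Path, PathBuf};

/// Walk up from `start_dir` looking for a directory containing
/// package.json; return None instead of a misleading fallback path.
fn find_project_root(start_dir: &Path) -> Option<PathBuf> {
    let mut current = start_dir.to_path_buf();
    loop {
        if current.join("package.json").exists() {
            return Some(current);
        }
        if !current.pop() {
            return None; // reached the filesystem root without a match
        }
    }
}
```

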
- Around line 54-56: The code currently uses unwrap_or_default() when computing
global_config which yields an empty PathBuf if dirs::home_dir() is None; change
this to preserve the Option and handle the None case explicitly: replace the let
global_config = ... unwrap_or_default() with let global_config =
dirs::home_dir().map(|h| h.join(".checkpointrc.json")); then update any
subsequent logic that calls global_config.exists() to first check Option (e.g.,
if let Some(path) = &global_config { if path.exists() { ... } }) so the absence
of a home dir is handled clearly instead of relying on an empty PathBuf.

In `@crates/core/src/utils/transform_cache.rs`:
- Around line 35-38: The save method in transform_cache.rs currently returns
std::io::Result and uses serde_json::to_string_pretty(self)? which converts
serde_json::Error into io::Error and can lose context; change the signature of
TransformCache::save (function save) to return a richer error type consistent
with the codebase (e.g. Result<(), Box<dyn std::error::Error>> or the crate's
custom error enum) and map or propagate the serde_json::to_string_pretty and
fs::write errors without implicit conversion (use map_err to wrap
serde_json::to_string_pretty errors into the chosen error type or implement From
conversions), ensuring the original serde error details are preserved in the
returned error.

In `@crates/parser/src/lib.rs`:
- Around line 492-592: create_async_wrapper and create_sync_wrapper duplicate
almost identical code; extract a single helper (e.g., create_wrapper or
build_checkpoint_call) that takes the varying pieces (method name:
"executeAsync" vs "execute", and any other differing params) and build the
MemberExpr, function_name_arg, function_identifier_arg, original_args_arg,
optional context_arg, and CallExpr once; replace create_async_wrapper and
create_sync_wrapper to call this new helper, passing executeAsync/execute and
the same name/args/context so behavior and return type (CallExpr) remain
unchanged.
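The real refactor operates on swc AST nodes (`MemberExpr`, `CallExpr`); as a dependency-free sketch of the shape, with strings standing in for AST construction:

```rust
/// The only difference between the async and sync wrappers is the
/// method name, so build the call once from the shared pieces.
fn build_checkpoint_call(method: &str, function_name: &str, args: &[&str]) -> String {
    format!("__checkpoint.{method}({function_name:?}, [{}])", args.join(", "))
}

fn create_async_wrapper(name: &str, args: &[&str]) -> String {
    build_checkpoint_call("executeAsync", name, args)
}

fn create_sync_wrapper(name: &str, args: &[&str]) -> String {
    build_checkpoint_call("execute", name, args)
}
```

In the actual code the shared helper would return a `CallExpr` assembled once, with only the member name varying.
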
- Around line 151-156: The code clones entire AST nodes (e.g., calling
mark_expression(&Expr::Call(expr.clone()), ...) in mark_function_call and
similarly in visit_mut_new_expr) even though mark_expression only needs the
span; remove the clones by changing mark_expression to accept a Span (or add a
small helper like mark_expression_span(span: Span, ctx: ExprContext)) and call
that with expr.span (or node.span) instead of constructing
Expr::Call(expr.clone())/Expr::New(node.clone()); update all call sites
(mark_function_call and visit_mut_new_expr) to pass the span-based API and use
ExprId::from(span) inside the helper.

In `@doc/RELEASE.md`:
- Line 10: The release doc's step "Publish to crates.io: `cargo publish`" is
ambiguous for a Cargo workspace; update the instruction to specify which
crate(s) to publish (e.g., use `cargo publish --manifest-path
crates/core/Cargo.toml` or `cargo publish --package core`) or document repeating
`cargo publish` per crate (`crates/core`, `crates/parser`, `crates/shared`) and
any order/dependency constraints; reference the workspace crates by name (core,
parser, shared) and show the explicit flags/manifest-path approach so
maintainers know how to publish each crate correctly.

In `@examples/transform_examples/index.ts`:
- Line 98: The module currently exports ApiClient, ApiResponse, and
RequestConfig but not the HttpMethod type used by RequestConfig; update the
export statement to also export the HttpMethod type so consumers can reference
it. Locate the HttpMethod type definition and add it to the existing export list
(alongside ApiClient, type ApiResponse, type RequestConfig) so consumers can
import HttpMethod directly.

In `@examples/transform_examples/package.json`:
- Line 7: The dependency entry for "@types/bun" currently uses the
non-deterministic tag "latest"; update the package.json dependency for
"@types/bun" to a specific SemVer (e.g. a caret range like "^1.0.0" or an exact
version such as "1.2.3") to ensure reproducible installs, then run your lockfile
update (npm/yarn/pnpm) to persist the exact resolved version.
- Around line 9-10: Remove "typescript" from the "peerDependencies" object in
package.json and add it under "devDependencies" instead; specifically locate the
"peerDependencies" block containing the "typescript" key and move that
dependency entry into the "devDependencies" section so the example (private:
true) package treats TypeScript as a build/test tool rather than a
consumer-provided peer dependency.

In `@runtime/package.json`:
- Around line 9-12: The devDependency "@types/bun" is pinned to "latest" which
breaks reproducible builds; update the runtime package.json devDependencies
entry for "@types/bun" (the key "@types/bun" in package.json) to a specific
version (e.g. "1.3.6") so installs are deterministic across machines and CI.

In `@runtime/shm/shared-memory.ts`:
- Around line 93-102: waitAndRead currently busy-waits using performance.now()
and read(), causing 100% CPU; convert waitAndRead to an async function
(waitAndRead(timeoutMs: number): Promise<Uint8Array | null>) and insert a small
yield/sleep inside the loop (e.g., await a Promise-based sleep or a microtask
yield) so the loop does not spin; ensure you still check remaining time (use
performance.now() to compute remaining and cap the sleep to that) and update any
callers of waitAndRead to await the Promise-returning method.

Comment on lines +26 to 27
runs-on: self-hosted
steps:

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n .github/workflows/ci.yml | head -70

Repository: ErenayDev/checkpoint-ts

Length of output: 2139


🏁 Script executed:

# Let me see more context around the test job to check if there are other self-hosted runners
cat -n .github/workflows/ci.yml | sed -n '66,100p'

Repository: ErenayDev/checkpoint-ts

Length of output: 776


Self-hosted runners on PR jobs are a security risk.

The fmt, clippy, and doc jobs run on self-hosted runners while the workflow is triggered by pull_request events, exposing runner hosts to untrusted PR code. Use GitHub-hosted runners for PR events or implement fork guards to prevent self-hosted execution on external contributions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 26 - 27, The PR jobs `fmt`, `clippy`,
and `doc` currently use `runs-on: self-hosted` which exposes self-hosted runners
to untrusted PR code; update the workflow to use GitHub-hosted runners for
pull_request events (e.g., switch `runs-on` to a hosted label like
`ubuntu-latest` for those jobs) or add a fork-guard conditional so that when
`github.event_name == 'pull_request'` the `fmt`, `clippy`, and `doc` jobs do not
run on `self-hosted` (or are skipped) and only use self-hosted runners for
trusted events (push/main). Ensure the change targets the `fmt`, `clippy`, and
`doc` job blocks that contain `runs-on: self-hosted`.

jobs:
coverage:
runs-on: ubuntu-latest
runs-on: self-hosted

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# First, let's look at the codecov.yml file structure
echo "=== Checking .github/workflows/codecov.yml ==="
if [ -f ".github/workflows/codecov.yml" ]; then
  cat -n ".github/workflows/codecov.yml"
else
  echo "File not found, checking for similar files..."
  fd -t f 'codecov' .github/workflows/ 2>/dev/null || echo "No codecov workflows found"
fi

Repository: ErenayDev/checkpoint-ts

Length of output: 1065


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Run the verification script from the review comment
# to identify workflows combining pull_request + self-hosted
fd -t f '.+\.ya?ml$' .github/workflows 2>/dev/null | while read -r wf; do
  if rg -l 'pull_request' "$wf" 2>/dev/null | grep -q .; then
    if rg -l 'self-hosted' "$wf" 2>/dev/null | grep -q .; then
      echo "=== Potentially unsafe workflow: $wf ==="
      echo "Lines with pull_request trigger:"
      rg -n 'pull_request' "$wf" 2>/dev/null || true
      echo ""
      echo "Lines with self-hosted:"
      rg -n 'self-hosted' "$wf" 2>/dev/null || true
      echo ""
      echo "Checking for fork detection guards:"
      rg -n 'fork|trusted|if:' "$wf" 2>/dev/null || echo "No fork detection found"
      echo ""
    fi
  fi
done

Repository: ErenayDev/checkpoint-ts

Length of output: 1628


Do not execute untrusted PR jobs on self-hosted runners.

With on.pull_request (Line 6) and runs-on: self-hosted (Line 11), forked PR code can run on your infrastructure. This pattern exists in multiple workflows (codecov.yml, ci.yml, release.yml, web.yml) with no fork detection guards. Gate fork PRs or move PR jobs to GitHub-hosted runners.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/codecov.yml at line 11, The workflow currently allows
forked PRs to run on self-hosted runners because it uses on.pull_request and
runs-on: self-hosted; fix by preventing forked PRs from targeting your
self-hosted runner: either change the PR-targeted job to use GitHub-hosted
runners (e.g., replace runs-on: self-hosted with a hosted label) or add a
job-level guard that only runs when the pull request comes from the same repo
(use the GitHub context, e.g. an if condition like: github.event_name !=
'pull_request' || github.event.pull_request.head.repo.full_name ==
github.repository) so jobs defined where runs-on: self-hosted (referenced in
this file) refuse to execute for forked PRs.
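As a sketch, the job-level guard described in the prompt above could look like this (the job name and runner label are illustrative, not taken from the actual workflow):

```yaml
jobs:
  coverage:
    # Skip forked PRs so untrusted code never reaches the self-hosted runner.
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
    runs-on: self-hosted
```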

Comment on lines +1 to +15
# This file was autogenerated by dist: https://axodotdev.github.io/cargo-dist
#
# Copyright 2022-2024, axodotdev
# SPDX-License-Identifier: MIT or Apache-2.0
#
# CI that:
#
# * checks for a Git Tag that looks like a release
# * builds artifacts with dist (archives, installers, hashes)
# * uploads those artifacts to temporary workflow zip
# * on success, uploads the artifacts to a GitHub Release
#
# Note that the GitHub Release will be created with a generated
# title/body based on your changelogs.


⚠️ Potential issue | 🟠 Major

Regenerate this workflow file.

The pipeline failure indicates this autogenerated workflow is out of date. Run dist init to regenerate it and ensure it matches the current cargo-dist version (0.31.0) specified in dist-workspace.toml.

🧰 Tools
🪛 GitHub Actions: Release

[error] 1-1: Workflow file is out of date and needs regeneration. Diff shows changes (e.g., runs-on lines changed). Run 'dist init' to update the file. Process completed with exit code 255.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release.yml around lines 1 - 15, The autogenerated GitHub
Actions workflow .github/workflows/release.yml is out of date; regenerate it by
running the cargo-dist initializer to match the current cargo-dist version
(0.31.0) referenced in dist-workspace.toml: run the command dist init (or the
equivalent upgrade step for cargo-dist) in the repository root, commit the
updated .github/workflows/release.yml produced by dist, and verify the new
workflow aligns with the settings in dist-workspace.toml.

Comment on lines +145 to +150
let runtime_src = std::path::Path::new("runtime");
let runtime_dst = ctx.checkpoint_dir.join("runtime");

if !runtime_src.exists() {
    return Err(eyre::eyre!("Runtime directory not found at ./runtime"));
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, check the context around the problematic code
echo "=== Context around lines 145-150 in crates/core/src/main.rs ==="
sed -n '130,165p' crates/core/src/main.rs

echo ""
echo "=== Check for build.rs ==="
find crates/core -name "build.rs" -exec echo "Found: {}" \; -exec cat {} \;

echo ""
echo "=== Check Cargo.toml for runtime references ==="
grep -A5 -B5 "runtime" crates/core/Cargo.toml || echo "No explicit runtime references in Cargo.toml"

Repository: ErenayDev/checkpoint-ts

Length of output: 1281


🏁 Script executed:

#!/bin/bash
# Check where copy_runtime_files is called from
echo "=== Where copy_runtime_files is called ==="
rg -n "copy_runtime_files" crates/core/src/main.rs

echo ""
echo "=== Check main function and initialization context ==="
sed -n '1,100p' crates/core/src/main.rs | head -80

echo ""
echo "=== Check if there's any working directory setup ==="
rg -n "current_dir\|set_current_dir\|env::current" crates/core/src/main.rs

echo ""
echo "=== Check if runtime directory exists at repo root ==="
ls -la runtime 2>/dev/null || echo "No runtime directory at repository root"

echo ""
echo "=== Check Cargo.toml binary target configuration ==="
cat crates/core/Cargo.toml

Repository: ErenayDev/checkpoint-ts

Length of output: 4428


Hardcoded runtime path assumes execution from project root.

The path "runtime" is relative to the current working directory, not the binary location. If the user runs checkpoint from a different directory or after the binary is installed, this will fail with "Runtime directory not found at ./runtime."

Consider resolving the runtime path relative to the executable location using std::env::current_exe() or embedding it as a resource during the build process.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/core/src/main.rs` around lines 145 - 150, The hardcoded relative path
"runtime" (used to set runtime_src and runtime_dst in main.rs) fails if the
process is not started from the project root; change resolution to derive the
runtime directory relative to the executable location (use
std::env::current_exe() and take its parent then join("runtime")) before
checking existence and continuing to use ctx.checkpoint_dir.join("runtime") for
the destination; optionally fall back to an embedded/bundled resource if
current_exe() cannot be resolved or the runtime dir is missing.

Comment on lines +72 to +92
pub fn receive(&self) -> Option<Vec<u8>> {
    let header =
        unsafe { &*(self.shmem.as_ptr().add(SINGLE_CHANNEL_SIZE) as *const ChannelHeader) };

    if header.ready_flag.load(Ordering::SeqCst) == 0 {
        return None;
    }

    let len = header.data_len.load(Ordering::SeqCst) as usize;
    let buffer = unsafe {
        std::slice::from_raw_parts(
            self.shmem.as_ptr().add(SINGLE_CHANNEL_SIZE + HEADER_SIZE),
            len,
        )
    };

    let data = buffer.to_vec();
    header.ready_flag.store(0, Ordering::SeqCst);

    Some(data)
}

⚠️ Potential issue | 🔴 Critical

Missing bounds validation on received data length.

The len value is read from shared memory and used directly to create a slice. If the sender (or a malicious process) writes a len greater than BUFFER_SIZE, this creates an out-of-bounds read, causing undefined behavior.

🐛 Proposed fix to add bounds check
     pub fn receive(&self) -> Option<Vec<u8>> {
         let header =
             unsafe { &*(self.shmem.as_ptr().add(SINGLE_CHANNEL_SIZE) as *const ChannelHeader) };

         if header.ready_flag.load(Ordering::SeqCst) == 0 {
             return None;
         }

         let len = header.data_len.load(Ordering::SeqCst) as usize;
+        if len > BUFFER_SIZE {
+            header.ready_flag.store(0, Ordering::SeqCst);
+            return None; // or return an error
+        }
         let buffer = unsafe {
             std::slice::from_raw_parts(
                 self.shmem.as_ptr().add(SINGLE_CHANNEL_SIZE + HEADER_SIZE),
                 len,
             )
         };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/shared/src/lib.rs` around lines 72 - 92, The receive method reads
data_len from the ChannelHeader and uses it unchecked which can lead to
out-of-bounds reads; update receive (the function named receive, referencing
self.shmem, SINGLE_CHANNEL_SIZE, HEADER_SIZE and ChannelHeader) to validate that
len (header.data_len.load(...)) is within expected bounds (e.g., len as usize <=
BUFFER_SIZE and > 0) before constructing the slice; if the check fails, clear or
reset header.ready_flag appropriately and return None (or handle as an error) to
avoid creating the slice from an invalid length.

Comment on lines +41 to +45
let response: ExecuteResponse | null = null;

while (!response) {
  response = shm.waitAndReadJson<ExecuteResponse>(1000);
}

⚠️ Potential issue | 🟠 Major

Infinite blocking loop without timeout or cancellation.

This loop waits indefinitely for a response. If the checkpoint controller crashes or fails to respond, the application will hang forever. Consider adding a maximum wait time or retry limit.

Proposed fix with timeout
   let response: ExecuteResponse | null = null;
+  const maxWaitMs = 30000; // 30 second timeout
+  const startTime = performance.now();
 
   while (!response) {
+    if (performance.now() - startTime > maxWaitMs) {
+      throw new Error(`Checkpoint timeout waiting for response on ${functionName}`);
+    }
     response = shm.waitAndReadJson<ExecuteResponse>(1000);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/checkpoint-runtime.ts` around lines 41 - 45, The current loop using
response and shm.waitAndReadJson(1000) can hang forever; add a bounded wait
(e.g. maxWaitMs or maxRetries) and abort behavior: track elapsed time (or a
retry counter) around the loop that reads shm.waitAndReadJson<ExecuteResponse>,
break and throw a clear error (or return a failed result) when the timeout/retry
limit is reached, and optionally accept an AbortSignal to cancel early; update
the code that declares response, the while loop, and error handling so callers
see a deterministic failure instead of infinite blocking.

Comment on lines +37 to +45
shm.writeJson({
  log: logMessage,
  current_function: functionName,
});

shm.writeJson({
  type: "continue",
});
}

⚠️ Potential issue | 🟠 Major

Two consecutive writeJson calls may cause a race condition.

handleCheckpoint writes two separate messages in quick succession. The first message (log/current_function) may be overwritten before the consumer reads it, or the consumer might receive messages out of order depending on the IPC protocol.

Consider consolidating into a single message or ensuring the protocol handles this correctly.

Proposed fix - consolidate messages
 function handleCheckpoint(payload: CheckpointPayload): void {
   const { functionName, args, context } = payload;
   const logMessage = context
     ? `${context}.${functionName}(${JSON.stringify(args)})`
     : `${functionName}(${JSON.stringify(args)})`;
 
   shm.writeJson({
+    type: "continue",
     log: logMessage,
     current_function: functionName,
   });
-
-  shm.writeJson({
-    type: "continue",
-  });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/index.ts` around lines 37 - 45, The two consecutive shm.writeJson
calls in handleCheckpoint (the first writing {log, current_function} and the
second writing {type: "continue"}) can race; consolidate them into a single
shm.writeJson call that includes all fields (e.g. {type: "continue", log:
logMessage, current_function: functionName}) so the consumer always receives a
single atomic message, or alternatively implement a transactional/queued send in
the shm wrapper so writeJson guarantees ordering and atomicity.

Comment on lines +3 to +4
const libcPath =
  process.platform === "linux" ? "libc.so.6" : "libSystem.B.dylib";

⚠️ Potential issue | 🟠 Major

Missing Windows and other platform support.

The platform detection only handles Linux and assumes macOS for everything else. On Windows, dlopen with libSystem.B.dylib will fail at runtime since POSIX shared memory APIs don't exist.

Consider adding explicit error handling for unsupported platforms:

🛡️ Proposed fix
-const libcPath =
-  process.platform === "linux" ? "libc.so.6" : "libSystem.B.dylib";
+const libcPath = (() => {
+  switch (process.platform) {
+    case "linux":
+      return "libc.so.6";
+    case "darwin":
+      return "libSystem.B.dylib";
+    default:
+      throw new Error(`Unsupported platform for shared memory: ${process.platform}`);
+  }
+})();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/shm/ffi.ts` around lines 3 - 4, The platform check for libcPath only
handles Linux and treats everything else as macOS which breaks on Windows;
update the logic around the libcPath constant in runtime/shm/ffi.ts to
explicitly detect process.platform === "win32" (and any other unsupported
platforms) and either throw a clear, descriptive runtime error stating POSIX
shm/dlopen are unsupported on Windows or branch to a Windows-specific
implementation if available; ensure the error references libcPath and the
failing operation so callers (e.g., any code that uses dlopen/ffi via libcPath)
get a clear message about unsupported platforms.

Comment on lines +25 to +49
static open(shmName: string): SharedMemory {
  const nameBuffer = Buffer.from(shmName + "\0");

  const fd = libc.symbols.shm_open(ptr(nameBuffer), O_RDWR, 0o666);
  if (fd < 0) {
    throw new Error(`shm_open failed: ${shmName}`);
  }

  const address = libc.symbols.mmap(
    null,
    TOTAL_SIZE,
    PROT_READ | PROT_WRITE,
    MAP_SHARED,
    fd,
    0,
  );

  libc.symbols.close(fd);

  if (address === 0 || address === 0xffffffffffffffffn) {
    throw new Error("mmap failed");
  }

  return new SharedMemory(shmName, Number(address));
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n runtime/shm/shared-memory.ts

Repository: ErenayDev/checkpoint-ts

Length of output: 3590


🌐 Web query:

Bun FFI mmap return value pointer type

💡 Result:

In Bun's bun:ffi, mmap() (C return type void *) should be declared as returning FFIType.ptr (alias "ptr"/"pointer"). Bun represents pointers in JavaScript as a number (not BigInt). [1][2]

If you want to use the mapped memory, convert the returned pointer to an ArrayBuffer with toArrayBuffer(ptr, byteOffset, byteLength) (for mmap, you typically must supply byteLength). [2]

Sources:
[1] Bun FFI docs – supported FFI types (ptr = void*) https://bun.com/docs/runtime/ffi
[2] Bun FFI docs – pointers are number, and toArrayBuffer for reading pointers https://bun.com/docs/runtime/ffi


🌐 Web query:

MAP_FAILED error handling mmap POSIX standard

💡 Result:

In POSIX, mmap() signals failure by returning MAP_FAILED (not NULL) and setting errno:

  • Check for failure:

    void *p = mmap(...);
    if (p == MAP_FAILED) { /* use errno */ }

    On error, mmap() “shall return a value of MAP_FAILED and set errno to indicate the error,” and no successful call may return MAP_FAILED. [1]

  • What is MAP_FAILED: It’s a macro defined in <sys/mman.h>; commonly (void *)-1. [2]

  • Use errno for the reason: POSIX defines many possible errno values (commonly EACCES, EBADF, EINVAL, ENOMEM, EAGAIN, etc.). [1]

  • Important POSIX side-effect on failure: If mmap() fails for reasons other than EBADF, EINVAL, or ENOTSUP, some existing mappings in the target address range may have been unmapped—so you must not assume the address space is unchanged after a failed call. [3]

Sources:
[1] POSIX mmap(3p) (man7.org) https://man7.org/linux/man-pages/man3/mmap.3p.html
[2] MAP_FAILED definition example ((void *)-1) (man7 mmap return value text) https://man7.org/linux/man-pages/man2/munmap.2.html
[3] mmap() failure may unmap existing mappings (Oracle mmap(2)) https://docs.oracle.com/cd/E36784_01/html/E36872/mmap-2.html


Fix mmap error check: a BigInt literal won't match the number return type from FFI.

The check address === 0xffffffffffffffffn compares a number (Bun FFI returns pointers as JavaScript numbers) against a BigInt literal, so the comparison always evaluates to false and mmap failures returning MAP_FAILED ((void *)-1) are never caught. Compare against the numeric representation of MAP_FAILED for the platform's pointer width (e.g. address === -1 when the pointer is surfaced as a signed number) instead of a BigInt literal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/shm/shared-memory.ts` around lines 25 - 49, In SharedMemory.open, the
mmap failure check is wrong because it compares the numeric return from
libc.symbols.mmap to a BigInt literal; update the check in SharedMemory.open
(the address variable returned by mmap) to detect MAP_FAILED by comparing to -1
(e.g., address === -1) instead of 0xffffffffffffffffn, and keep the existing 0
check (address === 0 || address === -1) before throwing the "mmap failed" Error;
reference libc.symbols.mmap and the local address variable when making this
change.

Comment on lines +110 to +112
close(): void {
  libc.symbols.munmap(this.pointer, TOTAL_SIZE);
}

⚠️ Potential issue | 🟠 Major

close() does not prevent use-after-free.

After calling close(), the buffer field still points to unmapped memory. Any subsequent read() or write() calls would access invalid memory.

Consider nullifying the buffer and pointer, and adding guards to other methods.

🛡️ Proposed fix
   close(): void {
     libc.symbols.munmap(this.pointer, TOTAL_SIZE);
+    this.pointer = 0;
+    this.buffer = new Uint8Array(0);
   }
+
+  private ensureOpen(): void {
+    if (this.pointer === 0) {
+      throw new Error("SharedMemory has been closed");
+    }
+  }

Then add this.ensureOpen() at the start of read() and write() methods.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/shm/shared-memory.ts` around lines 110 - 112, The close()
implementation (calling libc.symbols.munmap(this.pointer, TOTAL_SIZE)) must null
out the mapped references and prevent subsequent use; after the munmap call set
this.buffer = null and this.pointer = 0 (or null) and mark the region closed,
add a private ensureOpen() helper that throws if buffer/pointer indicate closed,
and invoke this.ensureOpen() at the start of read() and write() so they
fail-fast on use-after-free; reference the existing close(), read(), write(),
TOTAL_SIZE and libc.symbols.munmap symbols when making the changes.

Signed-off-by: Erenay <103513350+ErenayDev@users.noreply.github.com>