
fix(pack): client chunking output root #2600

Merged

xusd320 merged 1 commit into next from fix/client-chunking-root on Feb 13, 2026

Conversation

@xusd320 (Contributor) commented Feb 13, 2026

This pull request makes a targeted change to the output root directory used for client chunking in the Project implementation. It uses the distribution root (dist_root) instead of the client root (client_root) when initializing the chunking context, so that output paths align with the intended build output structure.

Build output path update:

  • Changed the output_root parameter in the client chunking context initialization from client_root to dist_root in crates/pack-api/src/project.rs, ensuring output paths are consistent with the distribution directory.
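
To make the shape of the change concrete, here is a minimal, self-contained sketch; the struct fields, paths, and types below are illustrative stand-ins, not the actual Project type from crates/pack-api/src/project.rs:

```rust
// Illustrative stand-ins for the real types in crates/pack-api/src/project.rs.
struct Project {
    client_root: String, // virtual root used for client module resolution
    dist_root: String,   // on-disk root the build actually writes to
}

struct ClientChunkingContext {
    output_root: String,
}

impl Project {
    fn client_root(&self) -> String {
        self.client_root.clone()
    }

    fn dist_root(&self) -> String {
        self.dist_root.clone()
    }

    fn get_client_chunking_context(&self) -> ClientChunkingContext {
        ClientChunkingContext {
            // Before the fix this was `self.client_root()`; rooting output at the
            // distribution directory keeps client chunks under the real build
            // output path, consistent with the library chunking context.
            output_root: self.dist_root(),
        }
    }
}

fn main() {
    let project = Project {
        client_root: "/ROOT/client".into(),
        dist_root: "/ROOT/dist".into(),
    };
    let ctx = project.get_client_chunking_context();
    // Chunk output is now rooted at the on-disk dist directory, not the virtual client root.
    assert_eq!(ctx.output_root, project.dist_root());
    assert_ne!(ctx.output_root, project.client_root());
}
```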

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @xusd320, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the client chunking process by ensuring that generated output files are directed to the correct distribution root. This change standardizes the output directory, preventing potential misconfigurations and aligning the build output with the project's intended distribution structure.

Highlights

  • Client Chunking Output Path: The output_root parameter for client chunking context initialization has been updated to use dist_root instead of client_root.

Changelog

  • crates/pack-api/src/project.rs
    • Updated the output_root parameter in get_client_chunking_context to use self.dist_root() instead of self.client_root().

Activity

  • No specific activity has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Bot) left a comment


Code Review

This pull request makes a targeted change to correct the output root for the client chunking context. By switching from client_root to dist_root, the output path is now correctly aligned with the build's distribution directory. This is a good fix that ensures consistency with other parts of the codebase, like the library chunking context, and correctly uses a disk-based path for output instead of a virtual one. The change is sound and improves the build process's correctness.

@github-actions (Bot) commented

📊 Performance Benchmark Report (with-antd)

Utoopack Performance Report

Report ID: utoopack_performance_report_20260213_100712
Generated: 2026-02-13 10:07:12
Trace File: trace_antd.json (0.5GB, 3.18M events)
Test Project: examples/with-antd


Executive Summary

Key Findings

| Metric | Value | Assessment |
| --- | --- | --- |
| Total Wall Time | 7,990.1 ms | Baseline |
| Total Thread Work (de-duped) | 25,068.1 ms | Non-overlapping busy time |
| Effective Parallelism | 3.1x | thread_work / wall_time |
| Working Threads | 5 | Threads with actual spans |
| Thread Utilization | 62.7% | 🆗 Average |
| Total Spans | 1,589,847 | All B/E + X events |
| Meaningful Spans (>= 10us) | 514,151 | 32.3% of total |
| Tracing Noise (< 10us) | 1,075,696 | 67.7% of total |

Note on Thread Work: Thread work is computed by merging overlapping intervals
per thread, eliminating double-counting from nested spans. This gives the true
wall-clock busy time across all threads.
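
The de-duplication described in the note above amounts to a per-thread merge of overlapping intervals. Here is a small illustrative sketch of that computation (the span representation and the example numbers are made up; this is not the report tooling itself):

```rust
// Spans as (start_us, end_us) pairs for a single thread.
fn busy_time_us(mut spans: Vec<(u64, u64)>) -> u64 {
    // Sort by start time so overlapping/nested spans become adjacent.
    spans.sort_by_key(|&(start, _)| start);

    let mut busy: u64 = 0;
    let mut current: Option<(u64, u64)> = None;

    for (start, end) in spans {
        match current {
            // Overlapping or nested span: extend the current merged interval.
            Some((cur_start, cur_end)) if start <= cur_end => {
                current = Some((cur_start, cur_end.max(end)));
            }
            // Disjoint span: close out the previous interval and start a new one.
            Some((cur_start, cur_end)) => {
                busy += cur_end - cur_start;
                current = Some((start, end));
            }
            None => current = Some((start, end)),
        }
    }
    if let Some((cur_start, cur_end)) = current {
        busy += cur_end - cur_start;
    }
    busy
}

fn main() {
    // A 100us parent span with a nested 60us child counts once, not twice.
    let spans = vec![(0, 100), (20, 80), (150, 200)];
    assert_eq!(busy_time_us(spans), 150);
}
```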

Workload Distribution by Tier

| Category | Tasks | Total Time (ms) | % of Thread Work |
| --- | --- | --- | --- |
| P0: Runtime/Resolution | 0 | 0.0 | 0.0% |
| P1: I/O & Heavy Tasks | 38,551 | 3,596.3 | 14.3% |
| P3: Asset Pipeline | 29,654 | 3,315.6 | 13.2% |
| P4: Bridge/Interop | 0 | 0.0 | 0.0% |
| Other | 445,946 | 21,194.2 | 84.5% |

Note: Percentages may sum to >100% because task durations include nesting
while thread work is de-duplicated. This is intentional for hotspot attribution.


Parallelization Analysis

Thread Utilization

| Metric | Value |
| --- | --- |
| Working Threads | 5 |
| Total Thread Work (de-duped) | 25,068.1 ms |
| Avg Work per Thread | 5,013.6 ms |
| Effective Parallelism | 3.14x |
| Thread Utilization | 62.7% |

Assessment: With 5 working threads, achieving 3.1x parallelism indicates significant loss of potential parallelism.
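
For concreteness, the two derived metrics above follow directly from the raw numbers in the table; this small snippet simply replays that arithmetic (a sanity check, not part of the report tooling):

```rust
fn main() {
    let wall_time_ms = 7_990.1_f64;
    let thread_work_ms = 25_068.1_f64;
    let working_threads = 5.0_f64;

    // Effective parallelism: thread work delivered per unit of wall-clock time.
    let parallelism = thread_work_ms / wall_time_ms; // ≈ 3.14x

    // Utilization: achieved parallelism relative to the 5 threads that did any work.
    let utilization = parallelism / working_threads; // ≈ 0.627 -> 62.7%

    println!(
        "parallelism = {:.2}x, utilization = {:.1}%",
        parallelism,
        utilization * 100.0
    );
}
```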


Top 20 Tasks by Total Duration

| Total (ms) | Count | Avg (us) | Max (ms) | % Work | Task Name |
| --- | --- | --- | --- | --- | --- |
| 7,855.4 | 184,013 | 42.7 | 11.4 | 31.3% | module |
| 4,016.7 | 70,153 | 57.3 | 254.9 | 16.0% | process module |
| 3,506.5 | 35,663 | 98.3 | 254.8 | 14.0% | analyze ecmascript module |
| 2,847.5 | 24,632 | 115.6 | 63.7 | 11.4% | code generation |
| 1,776.8 | 62,053 | 28.6 | 8.2 | 7.1% | internal resolving |
| 1,742.3 | 58,214 | 29.9 | 8.7 | 7.0% | resolving |
| 1,362.4 | 30,540 | 44.6 | 21.5 | 5.4% | precompute code generation |
| 1,342.9 | 14,495 | 92.6 | 50.8 | 5.4% | chunking |
| 1,291.7 | 13,177 | 98.0 | 127.4 | 5.2% | compute async module info |
| 1,088.8 | 8,044 | 135.4 | 51.9 | 4.3% | parse ecmascript |
| 511.7 | 5,069 | 100.9 | 48.7 | 2.0% | compute async chunks |
| 296.9 | 1,936 | 153.4 | 17.0 | 1.2% | generate source map |
| 76.3 | 2,165 | 35.2 | 6.4 | 0.3% | read file |
| 67.9 | 607 | 111.9 | 17.9 | 0.3% | compute binding usage info |
| 62.6 | 1,873 | 33.4 | 10.8 | 0.2% | collect mergeable modules |
| 58.6 | 104 | 563.1 | 17.2 | 0.2% | make production chunks |
| 33.7 | 563 | 59.8 | 3.1 | 0.1% | async reference |
| 28.9 | 6 | 4818.3 | 13.8 | 0.1% | compute merged modules |
| 28.5 | 14 | 2038.2 | 9.8 | 0.1% | apply effects |
| 28.0 | 13 | 2154.4 | 9.8 | 0.1% | write file |

Deep Dive by Tier

Tier 1: Runtime & Resolution (P0)

Focus: Task scheduling and dependency resolution.

| Metric | Value | Status |
| --- | --- | --- |
| Total Scheduling Time | 0.0 ms | ✅ Normal |
| Resolution Hotspots | 0 distinct task types | Check Top Tasks |

Potential P0 Issues:

  • Thread utilization at 62.7% suggests critical path serialization or lock contention.
  • 1,075,696 spans < 10us (67.7%) contribute to scheduler pressure.

Tier 2: Physical & Resource Barriers (P1)

Focus: Hardware utilization, I/O, and heavy monoliths.

| Metric | Value | Status |
| --- | --- | --- |
| I/O Work (Estimated) | 3,596.3 ms | ✅ Healthy |
| Large Tasks (> 100ms) | 3 | Minimal |

Tier 3: Architecture & Asset Pipeline (P2-P3)

Focus: Global state and transformation pipeline.

| Metric | Value | Status |
| --- | --- | --- |
| Asset Processing (P3) | 3,315.6 ms | 13.2% of work |
| Bridge Overhead (P4) | 0.0 ms | ✅ Low |

Duration Distribution

| Range | Count | Percentage |
| --- | --- | --- |
| < 10us (noise) | 1,075,696 | 67.7% |
| 10us - 100us | 488,764 | 30.7% |
| 100us - 1ms | 21,302 | 1.3% |
| 1ms - 10ms | 4,002 | 0.3% |
| 10ms - 100ms | 80 | 0.0% |
| > 100ms | 3 | 0.0% |

Diagnostic Signal Summary

| Signal | Status | Finding |
| --- | --- | --- |
| Tracing Noise (P0) | ⚠️ Significant | 67.7% of spans < 10us |
| Thread Utilization (P0) | 🆗 Average | 62.7% utilization |
| Heavy Monoliths (P1) | ✅ Minimal | 3 tasks > 100ms |
| Asset Pipeline (P3) | Review | 3,315.6 ms total |
| Bridge/Interop (P4) | Low | 0.0 ms total |

Action Items (P0-P4)

  1. [P0] Profile lock contention to address 37% lost parallelism
  2. [P1] Break down heavy monolith tasks (>100ms) to improve granularity
  3. [P1] Review I/O patterns for potential batching opportunities
  4. [P3] Optimize asset transformation pipeline hot-spots
  5. [P4] Reduce "chatty" bridge operations if interop overhead is significant

Report generated by Utoopack Performance Analysis Agent on 2026-02-13
Following: Utoopack Performance Analysis Agent Protocol

@xusd320 merged commit 055a90a into next on Feb 13, 2026
16 checks passed
@xusd320 deleted the fix/client-chunking-root branch on February 13, 2026 at 10:12