Conversation
Delete LegacyProjectMigrator and all .fsproj backward compatibility code. When users attempt to open an old .fsproj project, a user-facing alert is shown instead of attempting migration. Remove legacy CodingKeys from RecentProjectInfo and legacy file extension constant. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…r, and keystroke tracks, updating UI and rendering logic.
…sistency across the codebase.
…k color definitions, and consolidate timeline subviews.
- Remove segment ID display from inspector panel - Replace horizontal zoom sliders with vertical layout showing percentage label and numeric text field input - Add start/end center position pickers (CenterPointPicker) with X/Y numeric inputs for camera segments - Extract segment binding helpers into extension to keep struct body under lint limit Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… properties - Delete .fsproj file format support (open/drop handlers, error cases) - Remove legacy v2 package creation (createPackage method) and data loading (MouseRecordingAdapter, MouseDataConverter legacy convert functions) - Add uiStateSamples to event streams (PolyUIStateEvent, uistates-0.json) so recording.mouse.json is no longer needed for UI state data - Add PackageManager.createPackageFromVideo for video-only imports - Delete deprecated centerX/centerY properties on TransformKeyframe, TransformValue, CursorStyleKeyframe, TransformState; migrate all callers to use center.x/center.y with NormalizedPoint - Remove legacy Codable fallbacks in MediaAsset and MouseRecording - Delete dead code (MouseRecording.cleaned() extension) - Update CLAUDE.md and AGENTS.md documentation 25 files changed, -356 net lines removed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rove interpolation Remove position from CursorSegment and CursorStyleKeyframe so segments only manage style, scale, and visibility. Cursor position is now always derived from raw mouse data. Add useSmoothCursor toggle to CursorTrackV2 with a checkbox in the timeline track header. Improve MousePositionInterpolator with Gaussian pre-filter, boundary reflection, and idle stabilization. Delete CursorInterpolationGenerator and MouseDataCleaner (no longer needed). Simplify ClickCursorGenerator to produce style-only keyframes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rop information in project creation.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in CodeRabbit's settings.
Walkthrough

Refactors the codebase from a keyframe-centric model to a segment-centric pipeline, removes legacy .fsproj migration and ripple/zoom/keyframe subsystems, and introduces a V2 generation pipeline (EventTimeline → IntentClassifier → Scene/Shot/Transition planning → Camera simulation → Post-processing → Emission) plus tests and project reorganization.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Mouse as MouseDataSource
    participant UI as UIStateSamples
    participant Timeline as EventTimeline
    participant Classifier as IntentClassifier
    participant Segmenter as SceneSegmenter
    participant Planner as ShotPlanner
    participant Simulator as CameraSimulator
    participant Post as PostProcessing
    participant Emitter as Emitters
    Mouse->>Timeline: build(from mouseData, uiStateSamples)
    UI->>Timeline: include UI state events
    Timeline->>Classifier: classify(timeline, uiStateSamples)
    Classifier->>Segmenter: intent spans
    Segmenter->>Planner: scenes
    Planner->>Simulator: shot plans + transitions
    Simulator->>Post: simulated path (scenes + transitions)
    Post->>Emitter: refined path
    Emitter->>App: GeneratedTimeline (cameraTrack, cursorTrack, keystrokeTrack)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Summary of Changes

Hello @syi0808, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a significant architectural shift in how timeline effects are managed, transitioning from individual keyframes to a more robust segment-based system for camera, cursor, and keystroke animations. This refactoring aims to improve the flexibility and maintainability of the timeline. Concurrently, the ripple effect feature has been deprecated and removed, simplifying the application's scope. The project file format has been updated, ending support for legacy project files, and new documentation has been added to guide autonomous development within the repository.
Code Review
This pull request introduces a major architectural refactoring, migrating the timeline from a keyframe-based model to a more flexible segment-based model. This is a significant and positive change that affects the core data structures, rendering pipeline, and UI components. As part of this, the 'Ripple' effect feature has been removed, and support for legacy .fsproj project files has been dropped in favor of a new project version. The changes are extensive but appear to be well-executed and consistent across the codebase. I've found one minor issue in the new documentation file that should be addressed.
> - All generated logs/summaries/tags for vector DB workflows must be in English.
>
> ## Build, Lint, and Test Commands
>
> Run all commands from repository root (`/Users/sung-yein/Workspace/screenize`).
The repository root path is hardcoded to a user-specific directory. This should be generalized to be useful for any contributor. Using a placeholder like <repository-root> or simply stating "from the repository root" would be more appropriate.
```diff
- Run all commands from repository root (`/Users/sung-yein/Workspace/screenize`).
+ Run all commands from the repository root.
```
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
Screenize/Views/Inspector/SettingsInspector.swift (1)
132-149: ⚠️ Potential issue | 🟡 Minor

LGTM — cursor scale read/write migrated to segment-based model.
The getter reads from the first segment (representative value), and the setter broadcasts the new scale to all segments. This is a clean pattern for a global cursor-scale setting.
The `TextField` (lines 108–115) doesn't clamp the input to the slider's `0.5...5.0` range. Since `CursorSegment.scale` is a plain property without validation, a user can type an out-of-range value (e.g., 10.0 or 0.1) that gets accepted and stored, potentially producing unexpected scaling behavior. Add validation to either the TextField binding or the `CursorSegment.scale` property to enforce the valid range.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/SettingsInspector.swift` around lines 132 - 149, The TextField currently allows values outside the intended 0.5...5.0 range; clamp/validate the input so CursorSegment.scale cannot be set out-of-range. Update the TextField binding (where you read/write cursorScale) to coerce new values into the 0.5...5.0 closed range before calling updateCursorScale, or add validation in updateCursorScale (or on CursorSegment.scale setter) to clamp newValue to min 0.5 / max 5.0; reference the cursorScale computed property, updateCursorScale(_:) method, and CursorSegment.scale when making this change.

Screenize/App/AppState.swift (2)
371-374: ⚠️ Potential issue | 🟡 Minor

Pre-existing: scale factor computation is a no-op.

`CGFloat(display.width) / CGFloat(display.width) * 2.0` always evaluates to `2.0`. If the intent was to derive the actual scale factor from the display's pixel density, this needs a real computation (e.g., using `NSScreen.backingScaleFactor`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/App/AppState.swift` around lines 371 - 374, The scaleFactor calculation in the CaptureMeta initializer is a no-op (CGFloat(display.width) / CGFloat(display.width) * 2.0 == 2.0); replace it with a real backing scale computation: obtain the display/screen's backingScaleFactor (e.g., via NSScreen.backingScaleFactor or by computing pixelWidth/pointWidth if the display struct exposes both) and set captureMeta = CaptureMeta(..., scaleFactor: actualScale) so CaptureMeta.scaleFactor reflects the true pixel density for the display (use NSScreen.main?.backingScaleFactor ?? 1.0 as a safe fallback or map the display to the correct NSScreen if multiple screens are present).
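The no-op above can be replaced by a genuinely derived value. A minimal, testable sketch (the function name and parameters here are hypothetical, not the project's API) divides pixel dimensions by point dimensions when both are known:

```swift
import Foundation

/// Derive a display scale factor from pixel vs. point width.
/// Falls back when either dimension is missing or misreported.
func derivedScaleFactor(pixelWidth: Int, pointWidth: Int, fallback: Double = 1.0) -> Double {
    // Guard against zero/negative sizes from a disconnected or misreported display.
    guard pixelWidth > 0, pointWidth > 0 else { return fallback }
    return Double(pixelWidth) / Double(pointWidth)
}
```

On macOS the same value is exposed directly as `NSScreen.backingScaleFactor`, so inside `AppState` the simpler route is the review's suggested `NSScreen.main?.backingScaleFactor ?? 1.0` fallback.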
404-426: ⚠️ Potential issue | 🟠 Major

Remove or fix `AppState.createProject` — method appears unused and contains bugs.
`AppState.createProject` is not called anywhere in the codebase (project creation uses `ProjectCreator.createFromRecording`/`createFromVideo` instead), yet it contains two issues:

1. Line 373 scale factor calculation is redundant: `CGFloat(display.width) / CGFloat(display.width) * 2.0` always evaluates to `2.0` (width ÷ width = 1.0, then × 2.0). The comment "Retina" suggests this was meant to compute a real scale factor. Either compute the intended value or remove the redundant division.
2. Empty segments inconsistency: Unlike `ProjectCreator.createDefaultTimeline`, which initializes camera and cursor tracks with full-duration segments `[0, duration]`, this method creates empty segments. If this method is ever used, users would get projects with no default camera/cursor behavior.

Either remove this dead code or fix both issues and clarify its intended use.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/App/AppState.swift` around lines 404 - 426, AppState.createProject is dead or buggy: either delete it or fix the two issues — replace the bogus Retina scale calc and initialize default segments to full duration like ProjectCreator.createDefaultTimeline. If keeping it, compute a real scale factor (e.g., use NSScreen/main display scale or videoMetadata.scale instead of CGFloat(display.width)/CGFloat(display.width)*2.0) in createProject, and create CameraTrack and CursorTrackV2 segments covering [0, videoMetadata.duration] (not empty arrays) so behavior matches ProjectCreator.createDefaultTimeline; otherwise remove the entire createProject method to avoid unused, error-prone code.

Screenize/ViewModels/EditorViewModel.swift (1)
554-576: ⚠️ Potential issue | 🟡 Minor

`addKeystrokeSegment` redundantly sets `hasUnsavedChanges` and calls `invalidatePreviewCache`.

Lines 574–575 set `hasUnsavedChanges = true` and call `invalidatePreviewCache()`, but the calling method `addSegment(to:at:)` (lines 502–503) already does the same. `addTransformSegment` and `addCursorSegment` don't duplicate this — only `addKeystrokeSegment` does.

Remove redundant calls
```diff
     track.addSegment(newSegment)
     project.timeline.tracks[trackIndex] = .keystroke(track)
     selectedSegmentID = newSegment.id
     selectedSegmentTrackType = .keystroke
-    hasUnsavedChanges = true
-    invalidatePreviewCache()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/ViewModels/EditorViewModel.swift` around lines 554 - 576, The addKeystrokeSegment function duplicates work by setting hasUnsavedChanges = true and calling invalidatePreviewCache(), which is already handled by its caller addSegment(to:at:); remove those two lines from addKeystrokeSegment (leaving selection updates and track mutation intact) so its behavior matches addTransformSegment and addCursorSegment and relies on addSegment(to:at:) to mark unsaved changes and invalidate the preview cache.
🧹 Nitpick comments (17)
Screenize/Project/ScreenizeProject.swift (1)
27-28: Stale comment references old version scheme.

The comment mentions "v4" and "v2 projects" but the project version is now 5. Consider updating this to reflect the current versioning.
📝 Suggested comment update
```diff
-    // v4: polyrecorder interop block (nil for v2 projects)
+    // Polyrecorder interop block (nil for legacy projects without event streams)
     var interop: InteropBlock?
```
Verify each finding against the current code and only fix it if needed. In `@Screenize/Project/ScreenizeProject.swift` around lines 27 - 28, Update the stale inline comment for the InteropBlock variable: replace "v4: polyrecorder interop block (nil for v2 projects)" with wording that reflects the current versioning (e.g., mention v5 and which older versions may be nil). Locate the declaration of var interop: InteropBlock? and update the comment to reference the current project version (v5) and the appropriate legacy versions rather than v4/v2 so the comment stays accurate.

Screenize/Render/MouseDataConverter.swift (1)
57-66: Flushed pending-down events are not sorted by timestamp.

Dictionary iteration order is unspecified, so the flushed `RenderClickEvent` entries may appear out of chronological order in the `clicks` array. If downstream consumers expect sorted clicks, consider sorting the final array.

Proposed fix
```diff
     for pending in pendingDownByType.values {
         let clickType: ClickType = (pending.clickType == .leftDown) ? .left : .right
         clicks.append(RenderClickEvent(
             timestamp: pending.time,
             duration: 0.1,
             x: pending.position.x,
             y: pending.position.y,
             clickType: clickType
         ))
     }
+    clicks.sort { $0.timestamp < $1.timestamp }
+
     return (positions, clicks)
```
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/MouseDataConverter.swift` around lines 57 - 66, The loop that flushes pendingDownByType into clicks can produce out-of-order timestamps because Dictionary iteration is unordered; update the flush to guarantee chronological order by collecting the pending values and sorting them by their timestamp (pending.time) before creating RenderClickEvent entries, or alternatively sort the final clicks array by timestamp after appending; reference pendingDownByType, pending.time, RenderClickEvent, and clicks when making the change.

Screenize/Project/ProjectCreator.swift (1)
204-213: Stale naming: `autoGenerateKeyframes` still references "keyframes".

Given the broader rename from keyframes to segments, this property and its doc comment should be updated to `autoGenerateSegments` (or similar) to stay consistent with the new model.

Proposed rename
```diff
-    /// Auto-run smart generators
-    var autoGenerateKeyframes: Bool = false
+    /// Auto-run smart generators
+    var autoGenerateSegments: Bool = false
```
Verify each finding against the current code and only fix it if needed. In `@Screenize/Project/ProjectCreator.swift` around lines 204 - 213, Rename the stale property and its doc comment from autoGenerateKeyframes to autoGenerateSegments and update any references to match the new terminology; specifically change the declaration var autoGenerateKeyframes: Bool = false and its comment "Auto-run smart generators" to use autoGenerateSegments, and ensure any usages, tests, and external callers referencing autoGenerateKeyframes are updated accordingly; leave the AutoGeneratorType enum (cases clickZoom, cursorSmooth) unchanged unless those names also require terminology updates.

Screenize/Timeline/Timeline.swift (1)
68-117: Setting a track accessor to `nil` is silently ignored — document or handle explicitly.

The `cameraTrack`, `cursorTrackV2`, and `keystrokeTrackV2` setters use `guard let newValue else { return }`, meaning assigning `nil` does nothing. Since the getters return `Optional`, callers might expect `timeline.cameraTrack = nil` to remove the camera track. If removal isn't intended, consider making the setters non-optional or adding a comment explaining the design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Timeline/Timeline.swift` around lines 68 - 117, The setters for cameraTrack, cursorTrackV2, and keystrokeTrackV2 currently ignore assignments of nil (guard let newValue else { return }) so callers cannot remove tracks; update each setter to treat newValue == nil as a removal: when nil, find the index in tracks where $0.trackType == .camera / .cursor / .keystroke and remove(at: index) if present, otherwise when non-nil keep the existing logic (replace at index or insert/append). Use the existing symbols cameraTrack, cursorTrackV2, keystrokeTrackV2, tracks, and trackType to locate and change the setter behavior.Screenize/Timeline/Track.swift (2)
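The nil-as-removal behavior the comment asks for can be sketched with simplified stand-ins for the project's types (the structs below are illustrative; the real `Timeline`/`Track` shapes differ):

```swift
enum TrackType { case camera }

struct CameraTrack { var segments: [ClosedRange<Double>] = [] }

enum AnyTrack {
    case camera(CameraTrack)
    var trackType: TrackType { .camera }
}

struct DemoTimeline {
    var tracks: [AnyTrack] = []

    /// Setter treats `nil` as explicit removal instead of silently ignoring it.
    var cameraTrack: CameraTrack? {
        get {
            for case .camera(let track) in tracks { return track }
            return nil
        }
        set {
            let index = tracks.firstIndex { $0.trackType == .camera }
            switch (index, newValue) {
            case (let i?, let track?): tracks[i] = .camera(track)    // replace existing
            case (nil, let track?):    tracks.append(.camera(track)) // insert new
            case (let i?, nil):        tracks.remove(at: i)          // nil removes the track
            case (nil, nil):           break                         // nothing to do
            }
        }
    }
}
```

If removal is intentionally unsupported, the alternative is simply a doc comment on each setter stating that `nil` assignments are ignored.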
454-499: Significant code duplication between `CameraTrack` and `CursorTrackV2`.

`addSegment`, `updateSegment`, `removeSegment`, `activeSegment`, and `hasOverlap` are nearly identical (differing only in the segment type). Consider extracting the overlap-guarded logic into a generic helper or protocol extension to avoid maintaining the same code in two places.

Also applies to: 501-546
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Timeline/Track.swift` around lines 454 - 499, CameraTrack duplicates overlap-guarded segment management logic found in CursorTrackV2 (addSegment, updateSegment, removeSegment, activeSegment, hasOverlap); extract a generic solution: define a protocol (e.g., SegmentProtocol with id, startTime, endTime) and a TrackManaging extension or helper generic type (e.g., SegmentTrackManager<T: SegmentProtocol>) that implements addSegment(_:), updateSegment(_:), removeSegment(id:), activeSegment(at:), and hasOverlap(_:excluding:), then make CameraTrack and CursorTrackV2 use that shared implementation by delegating to the manager or conforming to a protocol extension so the duplicate implementations (addSegment, updateSegment, removeSegment, activeSegment, hasOverlap) are removed from both types.
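A hedged sketch of the suggested extraction — the names `Segment` and `SegmentContainer` are illustrative, not the project's actual protocols:

```swift
import Foundation

protocol Segment: Identifiable {
    var startTime: Double { get }
    var endTime: Double { get }
}

protocol SegmentContainer {
    associatedtype S: Segment
    var segments: [S] { get set }
}

extension SegmentContainer {
    /// True if `candidate` overlaps any stored segment (optionally excluding one id,
    /// so updates don't collide with the segment being replaced).
    func hasOverlap(_ candidate: S, excluding id: S.ID? = nil) -> Bool {
        segments.contains { existing in
            existing.id != id
                && existing.startTime < candidate.endTime
                && candidate.startTime < existing.endTime
        }
    }

    /// Overlap-guarded insert; keeps segments sorted by start time.
    @discardableResult
    mutating func addSegment(_ segment: S) -> Bool {
        guard !hasOverlap(segment) else { return false }
        segments.append(segment)
        segments.sort { $0.startTime < $1.startTime }
        return true
    }

    /// Segment active at a given time, using a half-open [start, end) convention.
    func activeSegment(at time: Double) -> S? {
        segments.first { $0.startTime <= time && time < $0.endTime }
    }
}
```

`CameraTrack` and `CursorTrackV2` would then conform by exposing their `segments` arrays, keeping a single copy of the guard logic.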
14-55: Add TODO comments or defer `mode`, `cursorFollow`, and `transitionToNext` on `CameraSegment` (and `clickFeedback` on `CursorSegment`) if not actively used in `FrameEvaluator`.

These properties are stored and serialized but never read during frame evaluation. If they're placeholders for planned features, add clarifying TODO comments (e.g., `// TODO: implement transition blending`). Otherwise, consider deferring them to reduce dead data in the model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Timeline/Track.swift` around lines 14 - 55, CameraSegment's stored properties mode, cursorFollow, and transitionToNext (and CursorSegment's clickFeedback) are serialized but never read by FrameEvaluator; either annotate them with clear TODO comments noting they're placeholders for future features (e.g., "// TODO: implement transition blending / cursor follow in FrameEvaluator") or change their types to be deferred/optional so they don't force dead data into the model (e.g., make mode?/cursorFollow?/transitionToNext? and clickFeedback?); update the declarations in CameraSegment and CursorSegment and add a brief note in FrameEvaluator referencing these symbols so reviewers know why they're unused now.

Screenize/Views/Timeline/TimelineView.swift (4)
382-398: `DragGesture(minimumDistance: 0)` on the segment block may swallow the `.onTapGesture`.

The move gesture uses `minimumDistance: 0`, so it fires `onChanged` immediately on touch-down. The `.onTapGesture` on line 394 is a simultaneous gesture on the same view. In practice, both the drag's `onEnded` (which sets selection at line 456-458) and the `onTapGesture` (which sets selection at line 395-397) will fire, producing a redundant double-selection on simple taps. This is harmless but could cause unexpected side effects if `onSegmentSelect` ever performs non-idempotent work.

Consider either removing the `.onTapGesture` (since `moveGesture.onEnded` already handles selection) or increasing `minimumDistance` to a small value (e.g., 2) to distinguish taps from drags.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 382 - 398, The .onTapGesture on the segment view conflicts with the move gesture (moveGesture) because the DragGesture uses minimumDistance: 0 and triggers onChanged/onEnded, causing duplicate selection calls to onSegmentSelect and setting selectedSegmentID/selectedSegmentTrackType; fix by either removing the .onTapGesture block entirely (rely on moveGesture's onEnded to call onSegmentSelect and set selectedSegmentID/selectedSegmentTrackType) or modify the DragGesture inside moveGesture to use a small minimumDistance (e.g., 2) so simple taps are not interpreted as drags, and ensure only one code path updates onSegmentSelect, selectedSegmentID and selectedSegmentTrackType.
258-307: Duplicated segment-range mapping across all three track types.

The `segmentBlocks(for:)` function repeats the same mapping pattern (build `SegmentRange` array, iterate segments, call `segmentBlock`) for `.camera`, `.cursor`, and `.keystroke`. This could be consolidated via a protocol extension or generic helper that maps any collection of segments with `id`/`startTime`/`endTime` into the shared block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 258 - 307, The switch in segmentBlocks(for:) duplicates the same mapping/ForEach for camera/cursor/keystroke; extract a generic helper to avoid repetition. Define a small protocol (e.g., SegmentRepresentable) exposing id, startTime, endTime and make the camera/cursor/keystroke segment types conform, then implement a helper method like renderSegmentBlocks<S: Collection & ...>(segments: S, trackType: TrackType) that builds the [SegmentRange] from segments, computes snapTargets(from:excluding:) and editBounds(from:excluding:currentStart:currentEnd:) and invokes segmentBlock(...) inside a ForEach; finally call that helper from segmentBlocks(for:) for each case (passing cameraTrack.segments, cursorTrack.segments, keystrokeTrack.segments and the corresponding trackType) so the mapping logic is centralized and duplicated code removed.
640-664: `NSCursor` push/pop via hover count is fragile.

If SwiftUI delivers mismatched `onHover` events (e.g., view removed mid-hover, rapid mouse movement across handles), the cursor stack can get out of sync. The `resetResizeCursorIfNeeded` in `onDisappear` mitigates this, but mid-session imbalance is still possible.

A safer approach is to set `NSCursor.resizeLeftRight` directly on hover enter and `NSCursor.arrow` on hover exit (or use `NSTrackingArea`/`.cursor()` modifier) rather than managing a push/pop stack manually.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 640 - 664, Replace the fragile push/pop stack logic in updateResizeCursor(_:) and resetResizeCursorIfNeeded() by setting the cursor directly on enter/exit: on hover enter call NSCursor.resizeLeftRight.set(), on hover exit call NSCursor.arrow.set(), and remove the resizeCursorHoverCount and any NSCursor.push()/pop() usage; update any callers of updateResizeCursor and onDisappear cleanup to stop relying on resetResizeCursorIfNeeded so the cursor state is deterministic even if hover events are mismatched.
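A sketch of the direct-set approach. The view name is hypothetical; it relies on AppKit's `NSCursor.set()`, which replaces the current cursor without touching the cursor stack:

```swift
import SwiftUI

/// Hypothetical stand-in for the timeline's resize handle view.
struct ResizeHandle: View {
    var body: some View {
        Rectangle()
            .frame(width: 6)
            .contentShape(Rectangle())
            .onHover { hovering in
                // `set()` is idempotent: mismatched enter/exit events can't
                // leave a stale cursor the way an unbalanced push/pop can.
                (hovering ? NSCursor.resizeLeftRight : NSCursor.arrow).set()
            }
    }
}
```

The trade-off is that `set()` overrides whatever cursor another view pushed, which is usually acceptable for mutually exclusive hover regions like resize handles.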
309-408: `segmentBlock` exceeds 80-line function body limit (87 lines).

Static analysis (SwiftLint `function_body_length`) flags this. Consider extracting the handle rendering into a helper view or function to reduce the method's cognitive load and satisfy the lint rule.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 309 - 408, segmentBlock is too long; extract the repeated left/right handle UI into a small helper (e.g., SegmentHandle view or makeSegmentHandle(...) function) that encapsulates the Rectangle styling, .contentShape, .onHover { updateResizeCursor(...) } and the highPriorityGesture(resizeGesture(...)) call (pass id, trackType, start, end, mode: .resizeStart/.resizeEnd, snapTargets, editBounds, and read selectedSegmentID for styling). Replace the two inline Rectangle handle blocks in segmentBlock with SegmentHandle(...) instances and keep segmentBlock responsible only for layout, calling moveGesture(...) and onSegmentSelect as it does now; this will reduce function_body_length and remove duplication while preserving resizeGesture, moveGesture, onSegmentSelect, selectedSegmentID, and updateResizeCursor usage.Screenize/Render/EffectCompositor.swift (1)
52-55: Debug print statement left in production code.

This fires only on the first cursor render, so impact is minimal, but consider gating it behind a `#if DEBUG` flag or removing it before release.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/EffectCompositor.swift` around lines 52 - 55, Remove the stray debug print left in EffectCompositor.renderCursor: either delete the print(...) line or wrap it in a compile-time debug guard (e.g. `#if` DEBUG ... `#endif`) so it never executes in production; keep the existing debugCursorRenderCount increment and ensure the log only appears under DEBUG builds to preserve current behavior during development.

Screenize/Render/FrameEvaluator.swift (1)
337-371: Linear scan over all click events on every frame evaluation.

`computeClickScaleModifier` iterates the full `clickEvents` array per frame. For long recordings with many clicks, this is O(n) per frame and O(n·m) for batch evaluation. Since `clickEvents` are sorted by timestamp, a binary search to narrow the window (only events where `downTime <= time <= upTime + settleDuration`) would reduce this to O(log n + k).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/FrameEvaluator.swift` around lines 337 - 371, computeClickScaleModifier currently scans all clickEvents each frame causing O(n) work; since clickEvents are time-sorted, replace the full linear scan with a binary search to find the first candidate click whose downTime could affect the given time (accounting for settleDuration) and then iterate forward only while downTime <= time and upTime + settleDuration >= time; specifically, implement a lower-bound search on click.timestamp to find the earliest index where click.endTimestamp + settleDuration >= time (or where click.timestamp <= time <= click.endTimestamp + settleDuration) and then evaluate clickModifier for that small window instead of every click, using the existing variables pressDuration, settleDuration, pressedScale and the same easeOutQuad logic in computeClickScaleModifier to preserve behaviour.Screenize/ViewModels/EditorViewModel.swift (1)
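The narrowing can be sketched with a lower-bound binary search. `Click` and `activeClicks` are illustrative stand-ins for the evaluator's real types, and the search assumes clicks are sorted by `downTime` and don't overlap (so expiry times are also non-decreasing):

```swift
import Foundation

struct Click {
    var downTime: Double
    var upTime: Double
}

/// Returns the clicks that can influence `time`, assuming `clicks` is sorted
/// by `downTime` and each click "expires" at `upTime + settle`.
func activeClicks(in clicks: [Click], at time: Double, settle: Double) -> ArraySlice<Click> {
    // Lower bound: first index whose click has not yet expired. Monotonic
    // expiry holds when clicks don't overlap; otherwise widen the window
    // by the maximum click duration.
    var lo = 0, hi = clicks.count
    while lo < hi {
        let mid = (lo + hi) / 2
        if clicks[mid].upTime + settle < time { lo = mid + 1 } else { hi = mid }
    }
    // Upper bound: stop as soon as downTime is in the future.
    var end = lo
    while end < clicks.count, clicks[end].downTime <= time { end += 1 }
    return clicks[lo..<end]
}
```

`computeClickScaleModifier` would then accumulate modifiers over the returned slice instead of the whole array, giving the O(log n + k) behavior the comment describes.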
481-481: Stale MARK comment: "Keyframe Management" should be "Segment Management".

Several doc comments and MARK headers still reference "keyframe" terminology (line 253: `autoGenerateKeyframes`, line 274: "Auto-generate keyframes", line 396: "keyframe tracks", line 481: "Keyframe Management"). These are inconsistent with the segment-based refactor.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/ViewModels/EditorViewModel.swift` at line 481, Update stale "keyframe" terminology to "segment" across this file: rename MARK "Keyframe Management" to "Segment Management", update the property autoGenerateKeyframes to use segment wording in its doc comment (e.g., "Auto-generate segments"), and replace any other comments or docstrings that mention "keyframe", "Auto-generate keyframes", or "keyframe tracks" with "segment", "Auto-generate segments", or "segment tracks". Search for the identifier autoGenerateKeyframes and any comment blocks around methods that manipulate tracks and update their wording to be consistent with the segment-based refactor.

Screenize/Views/EditorMainView.swift (1)
276-300: Residual "keyframe" terminology in property names, method names, and user-facing text.

- Line 276: `// MARK: - Keyframe Add Menu`
- Line 278: `keyframeAddMenu` property name
- Line 460/498: `generateKeyframes()` method name
- Line 476: User-facing string "Selected types replace existing keyframes. Unselected types are preserved." — this is visible to users and should read "segments."

Rename residual references

```diff
- // MARK: - Keyframe Add Menu
+ // MARK: - Segment Add Menu

- private var keyframeAddMenu: some View {
+ private var segmentAddMenu: some View {

- Text("Selected types replace existing keyframes. Unselected types are preserved.")
+ Text("Selected types replace existing segments. Unselected types are preserved.")

- private func generateKeyframes() async {
+ private func generateSegments() async {
```

Also update the call site on line 460:

```diff
- Task { await generateKeyframes() }
+ Task { await generateSegments() }
```

And the toolbar reference on line 226:

```diff
- keyframeAddMenu
+ segmentAddMenu
```

Also applies to: 460-460, 476-476, 498-498
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/EditorMainView.swift` around lines 276 - 300, The code still uses "keyframe" terminology; rename the MARK comment and the property keyframeAddMenu to use "segment" (e.g., // MARK: - Segment Add Menu and segmentAddMenu), rename the generateKeyframes() method to generateSegments() and update all call sites (including the toolbar reference that invokes it), and change the user-facing string "Selected types replace existing keyframes. Unselected types are preserved." to use "segments" instead; ensure you update any identifiers, labels, and comments consistently (keyframeAddMenu -> segmentAddMenu, generateKeyframes() -> generateSegments(), MARK and UILabel text) so compilation and UX reflect "segments" everywhere.

Screenize/Views/Inspector/InspectorView.swift (3)
104-141: Camera section bindings create individual `Binding` wrappers for each field — verbose but functional.

The nested `Binding(get:set:)` wrappers for start/end time and zoom are quite verbose. This is standard SwiftUI for decomposing a struct binding into field-level bindings, but if this pattern repeats (and it does across camera, cursor, keystroke sections), a small generic helper could reduce boilerplate. Low priority.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/InspectorView.swift` around lines 104 - 141, The cameraSection creates many per-field Binding(get:set:) wrappers for cameraSegmentBinding fields (startTime, endTime, startTransform.zoom, endTransform.zoom) — reduce boilerplate by adding a small generic helper (e.g., a Binding extension or helper function like binding(for:_, keyPath:)) that derives a field Binding from the segment Binding using a WritableKeyPath; replace the inline Binding(get:set:) calls in cameraSection (and similarly in cursor/keystroke sections) with calls to that helper so you can bind startTime, endTime, startTransform, endTransform and then further access zoom via key paths or map TransformValue, keeping references to cameraSegmentBinding, timeRangeFields, TransformValue, startTransform, endTransform, startTime and endTime to locate the usage sites.
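The generic helper the nitpick suggests could look like the following sketch (`field(_:)` is a hypothetical name). Note that SwiftUI's `Binding` already supports dynamic member lookup for writable key paths, so `cameraSegmentBinding.startTime` yields a `Binding<Double>` out of the box and may make a custom helper unnecessary:

```swift
import SwiftUI

extension Binding {
    /// Derive a field-level binding from a struct binding via a key path,
    /// replacing repeated Binding(get:set:) boilerplate.
    func field<T>(_ keyPath: WritableKeyPath<Value, T>) -> Binding<T> {
        Binding<T>(
            get: { wrappedValue[keyPath: keyPath] },
            set: { wrappedValue[keyPath: keyPath] = $0 }
        )
    }
}

// Usage sketch (assuming cameraSegmentBinding: Binding<CameraSegment>):
//   timeRangeFields(
//       start: cameraSegmentBinding.field(\.startTime),
//       end:   cameraSegmentBinding.field(\.endTime)
//   )
```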
254-278: Captured `trackIndex` in binding closures could reference the wrong track if the array mutates between body evaluations.

`trackIndex` is captured by value when the binding is created. If a track is inserted or removed before this index between the binding's creation and its later invocation (e.g., a slider drag during a concurrent timeline mutation), the `get`/`set` closures would read/write the wrong track. In practice SwiftUI re-evaluates the body on `@Binding` changes, so the window for staleness is small, but the pattern is fragile. A safer alternative is to look up the track by type inside the closure rather than capturing the index.
Sketch for cameraSegmentBinding (apply same to cursor/keystroke)
return Binding(
    get: {
-        if case .camera(let track) = self.timeline.tracks[trackIndex],
+        guard let idx = self.timeline.tracks.firstIndex(where: { $0.trackType == .transform }),
+              case .camera(let track) = self.timeline.tracks[idx],
-           let segmentIndex = track.segments.firstIndex(where: { $0.id == id }) {
+              let segmentIndex = track.segments.firstIndex(where: { $0.id == id }) else {
+            return CameraSegment(startTime: 0, endTime: 1, startTransform: .identity, endTransform: .identity)
+        }
-            return track.segments[segmentIndex]
+        return track.segments[segmentIndex]
-        }
-
-        return CameraSegment(startTime: 0, endTime: 1, startTransform: .identity, endTransform: .identity)
    },
    set: { updated in
-        if case .camera(var track) = self.timeline.tracks[trackIndex],
+        guard let idx = self.timeline.tracks.firstIndex(where: { $0.trackType == .transform }),
+              case .camera(var track) = self.timeline.tracks[idx],
-           track.updateSegment(updated) {
-            self.timeline.tracks[trackIndex] = .camera(track)
+              track.updateSegment(updated) else { return }
+        self.timeline.tracks[idx] = .camera(track)
-            self.onSegmentChange?()
+        self.onSegmentChange?()
-        }
    }
)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/InspectorView.swift` around lines 254 - 278, cameraSegmentBinding currently captures trackIndex which can become stale if timeline.tracks mutates; change the get/set closures to re-resolve the transform track each time by searching timeline.tracks for trackType == .transform (e.g., using firstIndex(where:) or pattern-matching to find the .camera case) and then locate the segment by id inside those closures instead of using the outer trackIndex; ensure set replaces the correct track by reconstructing the .camera(track) enum case when updating and call onSegmentChange(), and apply the same pattern to the cursor/keystroke binding helpers.
234-252: `timeRangeFields` has no input validation or clamping.

The start/end `TextField` values accept arbitrary `Double` input. A user could enter negative times or values exceeding the media duration. The underlying `updateSegment` methods will reject overlapping segments but won't clamp out-of-bounds values — those will persist in the segment data until the next validation point (`updateSegmentTimeRange` in the ViewModel clamps, but that path isn't used from the inspector).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/InspectorView.swift` around lines 234 - 252, timeRangeFields currently lets users enter arbitrary Double values; add immediate input validation and clamping inside timeRangeFields so the start and end bindings are constrained to [0, mediaDuration] and to each other (start <= end). Modify the view to accept or capture the media duration (e.g., add a duration parameter or reference the ViewModel) and add onChange handlers to the two TextFields that clamp the changed value with min/max (0 and duration) and enforce ordering by adjusting the other binding if needed; ensure these handlers call the existing updateSegmentTimeRange/updateSegment (or the ViewModel method used to persist changes) after clamping so the model stays consistent.
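The clamping this prompt describes reduces to a small pure function that can be unit-tested apart from the view; the tuple shape and the collapse-toward-start tie-breaking rule below are illustrative assumptions, not the project's existing API.

```swift
// Illustrative clamp for a segment time range: constrain both values to
// [0, duration] and keep start <= end by collapsing an inverted pair.
func clampedTimeRange(start: Double, end: Double, duration: Double)
    -> (start: Double, end: Double) {
    let s = min(max(start, 0), duration)
    let e = min(max(end, 0), duration)
    // Enforce ordering: if the pair inverted, collapse toward the start value.
    return s <= e ? (s, e) : (s, s)
}

print(clampedTimeRange(start: -1, end: 3, duration: 10))  // (start: 0.0, end: 3.0)
print(clampedTimeRange(start: 4, end: 2, duration: 10))   // (start: 4.0, end: 4.0)
print(clampedTimeRange(start: 5, end: 99, duration: 10))  // (start: 5.0, end: 10.0)
```

The view's `onChange` handlers would pass the edited pair through this function before calling the persistence method, so out-of-range input never reaches the model.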
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 111: Update AGENTS.md to reflect the post-migration naming and removed
legacy support: replace the phrase "keyframe/timeline terms" with "segment-based
naming" (or "segments"), change "Keep timeline keyframes sorted by time within
tracks" to "Keep segments sorted by start time within tracks" (or equivalent
segment-sorting guidance), and remove or replace "Respect existing migration
path for legacy `.fsproj` handling" with a note that `.fsproj` support has been
removed and no migration path is provided; ensure the three phrases mentioned
are edited to use "segment" terminology and the `.fsproj` sentence is deleted or
rewritten accordingly.
- Line 23: Replace the developer-specific absolute path
`/Users/sung-yein/Workspace/screenize` in AGENTS.md with a generic
repository-root reference (e.g., "repository root", `$(pwd)`, or `$(git
rev-parse --show-toplevel)`) or a placeholder like `<repo-root>` so instructions
work across machines and CI; update the sentence "Run all commands from
repository root (`/Users/sung-yein/Workspace/screenize`)." to use the chosen
generic form.
In `@Screenize/Project/PackageManager.swift`:
- Around line 217-219: The guard currently hard-fails on any project.version !=
5 (guard project.version == 5) which will block v4 users with no migration path;
update the check/throw to either (A) accept older versions and invoke a migrator
if you intend to support v4→v5 (e.g., adjust the condition and call the migrator
function), or (B) if v4 is intentionally unsupported, replace the thrown
PackageManagerError.unsupportedProjectVersion(project.version) with a more
descriptive error that lists supported versions (e.g., "supported: 5") and
provides migration guidance or a link, using the same symbols (the guard in
PackageManager.swift and PackageManagerError.unsupportedProjectVersion) so
callers see actionable instructions instead of a generic unsupported-version
message.
In `@Screenize/Render/FrameEvaluator.swift`:
- Around line 88-90: Clamp rawProgress to the [0, 1] range before passing it to
segment.interpolation.apply to avoid floating-point edge cases producing values
slightly outside the valid range; locate the computation of rawProgress in
FrameEvaluator (where duration is computed and rawProgress is defined) and
replace the direct rawProgress usage with a clamped value (e.g., using
min(max(rawProgress, 0), 1)) so progress =
segment.interpolation.apply(clampedRawProgress, duration: CGFloat(duration)).
In `@Screenize/Render/MouseDataConverter.swift`:
- Around line 26-54: The loop handling sortedClicks currently overwrites an
existing pending down in pendingDownByType when a second .leftDown/.rightDown
arrives, losing the first click; modify the branch that handles .leftDown and
.rightDown (in the for loop over sortedClicks in MouseDataConverter) to detect
an existing pending down for that ClickEventData.ClickType, emit a
RenderClickEvent for the previous down with a safe minimal duration/coordinates
(mirror how .leftUp/.rightUp builds RenderClickEvent), then replace the
pendingDownByType entry with the new down so the latest is stored; ensure you
use the same ClickType mapping (leftDown→.left, rightDown→.right) and
remove/replace the pending entry consistently.
In `@Screenize/ViewModels/EditorViewModel.swift`:
- Around line 515-523: The code currently discards the Bool result of
track.addSegment when creating CameraSegment/Cursor segments, then
unconditionally sets selectedSegmentID to the new segment's id, which can leave
selection pointing at a non-existent segment if addSegment returns false due to
overlap; update the logic in the EditorViewModel methods that create segments
(the CameraSegment and Cursor segment creation blocks that call
track.addSegment) to check the returned Bool and only assign selectedSegmentID
and perform further selection-related work when addSegment returns true,
otherwise skip the selection update and optionally early-return or surface an
error; specifically guard the result of CameraTrack.addSegment/
CursorTrackV2.addSegment before setting selectedSegmentID (and before any code
that assumes the segment was added).
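The guarded pattern this item asks for can be sketched with stand-in types; the real code would use `CameraTrack`/`CursorTrackV2` and the view model's `selectedSegmentID`, so everything below is illustrative.

```swift
import Foundation

// Stand-ins for the project's segment and track types.
struct Segment { let id = UUID(); var startTime: Double; var endTime: Double }

struct Track {
    var segments: [Segment] = []
    // Mirrors the Bool-returning addSegment that rejects overlapping segments.
    mutating func addSegment(_ new: Segment) -> Bool {
        let overlaps = segments.contains {
            new.startTime < $0.endTime && $0.startTime < new.endTime
        }
        guard !overlaps else { return false }
        segments.append(new)
        return true
    }
}

var track = Track()
var selectedSegmentID: UUID?

let segment = Segment(startTime: 0, endTime: 1)
// Only update selection when the insert actually succeeded.
if track.addSegment(segment) {
    selectedSegmentID = segment.id
}

let overlapping = Segment(startTime: 0.5, endTime: 1.5)
if track.addSegment(overlapping) {
    selectedSegmentID = overlapping.id  // not reached: overlap is rejected
}
```

With the guard in place, a rejected insert leaves the previous (valid) selection untouched instead of pointing at a segment that was never added.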
- Around line 506-528: In addTransformSegment, clamp the start time and ensure
the segment end never exceeds duration: compute a clampedStart = min(time,
max(0, duration - 0.05)) (so startTime never sits at or past duration), compute
candidateEnd = min(duration, clampedStart + 1.0), then set endTime =
max(clampedStart + 0.05, candidateEnd) (or simply endTime = min(duration,
max(clampedStart + 0.05, candidateEnd))) before creating
CameraSegment(startTime: clampedStart, endTime: endTime). Update references to
startTime/endTime used in CameraSegment creation and keep the rest of the logic
(track lookup, track.addSegment, selectedSegmentID, selectedSegmentTrackType)
unchanged.
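The clamping arithmetic above can be checked in isolation; the 0.05 s minimum length and 1.0 s default length are taken from the prompt itself, and the function shape is a sketch rather than the project's method.

```swift
// Compute a clamped [start, end] for a new one-second camera segment so the
// segment always fits inside [0, duration] with at least 0.05s of length.
func transformSegmentRange(at time: Double, duration: Double)
    -> (start: Double, end: Double) {
    let clampedStart = min(time, max(0, duration - 0.05))
    let candidateEnd = min(duration, clampedStart + 1.0)
    let end = min(duration, max(clampedStart + 0.05, candidateEnd))
    return (clampedStart, end)
}

print(transformSegmentRange(at: 2.0, duration: 10))   // (start: 2.0, end: 3.0)
// Near the media end the segment is pushed back so it never starts at or
// past `duration`, and the end is capped at `duration`.
print(transformSegmentRange(at: 9.99, duration: 10))
```

`CameraSegment(startTime:endTime:)` would then be constructed from the returned pair instead of the raw `time` and `time + 1.0`.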
- Around line 639-704: The undo snapshot is currently pushed before validating
the update (saveUndoSnapshot() is called before track.updateSegment(segment)),
which creates a no-op undo if updateSegment returns false; move the
saveUndoSnapshot() call so it runs only after track.updateSegment(segment)
returns true (i.e., call saveUndoSnapshot() immediately after the guard that
verifies track.updateSegment(segment) and before assigning
project.timeline.tracks[trackIndex] = .camera/.cursor/.keystroke(track) and
setting hasUnsavedChanges/invalidatePreviewCache), and apply this same reorder
in the .camera, .cursor and .keystroke branches of updateSegmentTimeRange.
In `@Screenize/Views/Inspector/PositionPicker.swift`:
- Around line 42-52: Guard against zero-size geometry in the DragGesture
handlers: before computing x and y in the .onChanged closure of the DragGesture,
check that size.width and size.height are > 0 and only perform value.location.x
/ size.width and value.location.y / size.height when valid; otherwise skip
updating (or set to a safe default like 0) to avoid producing NaN/inf. Update
the .onChanged block around the DragGesture (where isDragging, x, y, and
onChange are used) to perform these checks and still call onChange?() only when
a real update to x/y occurs.
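The zero-size guard can live in a small pure helper so the gesture closure stays trivial; returning `nil` (rather than a default) is one possible design, and the function name is an assumption.

```swift
import Foundation

// Normalize a drag location against a view size, guarding zero dimensions so
// the division never produces NaN/inf. Returns nil while the geometry has no
// area yet (gesture handlers can fire before layout settles).
func normalizedPosition(location: CGPoint, in size: CGSize) -> CGPoint? {
    guard size.width > 0, size.height > 0 else { return nil }
    return CGPoint(x: min(max(location.x / size.width, 0), 1),
                   y: min(max(location.y / size.height, 0), 1))
}

let p = normalizedPosition(location: CGPoint(x: 50, y: 150),
                           in: CGSize(width: 100, height: 100))
// p == CGPoint(x: 0.5, y: 1.0); the y overshoot is clamped to 1
let none = normalizedPosition(location: .zero, in: .zero)  // nil
```

The `.onChanged` closure would only assign `x`/`y` and invoke `onChange?()` when the helper returns a value.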
---
Outside diff comments:
In `@Screenize/App/AppState.swift`:
- Around line 371-374: The scaleFactor calculation in the CaptureMeta
initializer is a no-op (CGFloat(display.width) / CGFloat(display.width) * 2.0 ==
2.0); replace it with a real backing scale computation: obtain the
display/screen's backingScaleFactor (e.g., via NSScreen.backingScaleFactor or by
computing pixelWidth/pointWidth if the display struct exposes both) and set
captureMeta = CaptureMeta(..., scaleFactor: actualScale) so
CaptureMeta.scaleFactor reflects the true pixel density for the display (use
NSScreen.main?.backingScaleFactor ?? 1.0 as a safe fallback or map the display
to the correct NSScreen if multiple screens are present).
- Around line 404-426: AppState.createProject is dead or buggy: either delete it
or fix the two issues — replace the bogus Retina scale calc and initialize
default segments to full duration like ProjectCreator.createDefaultTimeline. If
keeping it, compute a real scale factor (e.g., use NSScreen/main display scale
or videoMetadata.scale instead of
CGFloat(display.width)/CGFloat(display.width)*2.0) in createProject, and create
CameraTrack and CursorTrackV2 segments covering [0, videoMetadata.duration] (not
empty arrays) so behavior matches ProjectCreator.createDefaultTimeline;
otherwise remove the entire createProject method to avoid unused, error-prone
code.
In `@Screenize/ViewModels/EditorViewModel.swift`:
- Around line 554-576: The addKeystrokeSegment function duplicates work by
setting hasUnsavedChanges = true and calling invalidatePreviewCache(), which is
already handled by its caller addSegment(to:at:); remove those two lines from
addKeystrokeSegment (leaving selection updates and track mutation intact) so its
behavior matches addTransformSegment and addCursorSegment and relies on
addSegment(to:at:) to mark unsaved changes and invalidate the preview cache.
In `@Screenize/Views/Inspector/SettingsInspector.swift`:
- Around line 132-149: The TextField currently allows values outside the
intended 0.5...5.0 range; clamp/validate the input so CursorSegment.scale cannot
be set out-of-range. Update the TextField binding (where you read/write
cursorScale) to coerce new values into the 0.5...5.0 closed range before calling
updateCursorScale, or add validation in updateCursorScale (or on
CursorSegment.scale setter) to clamp newValue to min 0.5 / max 5.0; reference
the cursorScale computed property, updateCursorScale(_:) method, and
CursorSegment.scale when making this change.
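The coercion into `0.5...5.0` is a one-liner once a generic clamp helper exists; the `clamped(to:)` extension below is a common utility, not something the codebase is known to already have.

```swift
// Generic clamp usable for cursorScale and any other bounded numeric input.
extension Comparable {
    func clamped(to range: ClosedRange<Self>) -> Self {
        min(max(self, range.lowerBound), range.upperBound)
    }
}

// Bounds come from the review comment: CursorSegment.scale supports 0.5...5.0.
let scaleRange: ClosedRange<Double> = 0.5...5.0
print(7.2.clamped(to: scaleRange))   // 5.0
print(0.1.clamped(to: scaleRange))   // 0.5
print(2.0.clamped(to: scaleRange))   // 2.0
```

The TextField binding's setter would call `updateCursorScale(newValue.clamped(to: scaleRange))` so out-of-range input never reaches the model.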
---
Nitpick comments:
In `@Screenize/Project/ProjectCreator.swift`:
- Around line 204-213: Rename the stale property and its doc comment from
autoGenerateKeyframes to autoGenerateSegments and update any references to match
the new terminology; specifically change the declaration var
autoGenerateKeyframes: Bool = false and its comment "Auto-run smart generators"
to use autoGenerateSegments, and ensure any usages, tests, and external callers
referencing autoGenerateKeyframes are updated accordingly; leave the
AutoGeneratorType enum (cases clickZoom, cursorSmooth) unchanged unless those
names also require terminology updates.
In `@Screenize/Project/ScreenizeProject.swift`:
- Around line 27-28: Update the stale inline comment for the InteropBlock
variable: replace "v4: polyrecorder interop block (nil for v2 projects)" with
wording that reflects the current versioning (e.g., mention v5 and which older
versions may be nil). Locate the declaration of var interop: InteropBlock? and
update the comment to reference the current project version (v5) and the
appropriate legacy versions rather than v4/v2 so the comment stays accurate.
In `@Screenize/Render/EffectCompositor.swift`:
- Around line 52-55: Remove the stray debug print left in
EffectCompositor.renderCursor: either delete the print(...) line or wrap it in a
compile-time debug guard (e.g. `#if DEBUG ... #endif`) so it never executes in
production; keep the existing debugCursorRenderCount increment and ensure the
log only appears under DEBUG builds to preserve current behavior during
development.
In `@Screenize/Render/FrameEvaluator.swift`:
- Around line 337-371: computeClickScaleModifier currently scans all clickEvents
each frame causing O(n) work; since clickEvents are time-sorted, replace the
full linear scan with a binary search to find the first candidate click whose
downTime could affect the given time (accounting for settleDuration) and then
iterate forward only while downTime <= time and upTime + settleDuration >= time;
specifically, implement a lower-bound search on click.timestamp to find the
earliest index where click.endTimestamp + settleDuration >= time (or where
click.timestamp <= time <= click.endTimestamp + settleDuration) and then
evaluate clickModifier for that small window instead of every click, using the
existing variables pressDuration, settleDuration, pressedScale and the same
easeOutQuad logic in computeClickScaleModifier to preserve behaviour.
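The lower-bound search can be sketched with stand-in types; it assumes the influence windows (`endTimestamp + settleDuration`) are monotone in index, which holds when clicks are sorted by start time and have bounded duration.

```swift
// Stand-in for the render click event; only the fields the search needs.
struct Click { var timestamp: Double; var endTimestamp: Double }

// Lower-bound binary search: first index whose influence window
// (endTimestamp + settleDuration) has not yet expired at `time`.
func firstRelevantIndex(in clicks: [Click], time: Double,
                        settleDuration: Double) -> Int {
    var lo = 0, hi = clicks.count
    while lo < hi {
        let mid = (lo + hi) / 2
        if clicks[mid].endTimestamp + settleDuration < time {
            lo = mid + 1  // window already over, discard the left half
        } else {
            hi = mid
        }
    }
    return lo
}

let clicks = [Click(timestamp: 0, endTimestamp: 0.1),
              Click(timestamp: 1, endTimestamp: 1.1),
              Click(timestamp: 5, endTimestamp: 5.1)]
let time = 1.2, settle = 0.3

// Scan forward only through clicks whose window overlaps `time`.
var active: [Click] = []
var i = firstRelevantIndex(in: clicks, time: time, settleDuration: settle)
while i < clicks.count, clicks[i].timestamp <= time {
    active.append(clicks[i])
    i += 1
}
print(active.count)  // 1
```

`computeClickScaleModifier` would then evaluate its existing easing logic only over this small window instead of every click per frame.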
In `@Screenize/Render/MouseDataConverter.swift`:
- Around line 57-66: The loop that flushes pendingDownByType into clicks can
produce out-of-order timestamps because Dictionary iteration is unordered;
update the flush to guarantee chronological order by collecting the pending
values and sorting them by their timestamp (pending.time) before creating
RenderClickEvent entries, or alternatively sort the final clicks array by
timestamp after appending; reference pendingDownByType, pending.time,
RenderClickEvent, and clicks when making the change.
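The deterministic flush is a one-line sort over the dictionary's values; the types below are stand-ins for `ClickEventData` and `RenderClickEvent`.

```swift
// Stand-ins for the pending down record and the emitted render event.
struct PendingDown { var time: Double }
struct RenderClick { var timestamp: Double }

let pendingDownByType: [String: PendingDown] = [
    "leftDown": PendingDown(time: 3.0),
    "rightDown": PendingDown(time: 1.0)
]

var clicks: [RenderClick] = [RenderClick(timestamp: 0.5)]

// Dictionary iteration order is unspecified, so sort the unmatched downs by
// time before emitting synthetic events; the output stays chronological.
for pending in pendingDownByType.values.sorted(by: { $0.time < $1.time }) {
    clicks.append(RenderClick(timestamp: pending.time))
}
print(clicks.map(\.timestamp))  // [0.5, 1.0, 3.0]
```

Alternatively, sorting the final `clicks` array once after the flush achieves the same guarantee.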
In `@Screenize/Timeline/Timeline.swift`:
- Around line 68-117: The setters for cameraTrack, cursorTrackV2, and
keystrokeTrackV2 currently ignore assignments of nil (guard let newValue else {
return }) so callers cannot remove tracks; update each setter to treat newValue
== nil as a removal: when nil, find the index in tracks where $0.trackType ==
.camera / .cursor / .keystroke and remove(at: index) if present, otherwise when
non-nil keep the existing logic (replace at index or insert/append). Use the
existing symbols cameraTrack, cursorTrackV2, keystrokeTrackV2, tracks, and
trackType to locate and change the setter behavior.
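The nil-as-removal setter can be sketched with a reduced track enum; the real setters match on `trackType` rather than the enum case, so treat the shapes below as illustrative.

```swift
// Reduced stand-ins for the Timeline track model.
struct CameraTrack { var name: String }
enum AnyTrack { case camera(CameraTrack); case other }

struct Timeline {
    var tracks: [AnyTrack] = []

    var cameraTrack: CameraTrack? {
        get {
            for case .camera(let t) in tracks { return t }
            return nil
        }
        set {
            let index = tracks.firstIndex {
                if case .camera = $0 { return true } else { return false }
            }
            switch (newValue, index) {
            case (nil, let i?): tracks.remove(at: i)       // nil now removes
            case (let t?, let i?): tracks[i] = .camera(t)  // replace in place
            case (let t?, nil): tracks.append(.camera(t))  // insert when absent
            case (nil, nil): break                         // nothing to remove
            }
        }
    }
}

var timeline = Timeline()
timeline.cameraTrack = CameraTrack(name: "cam")
timeline.cameraTrack = nil  // previously a silent no-op; now removes the track
print(timeline.tracks.count)  // 0
```

The same four-case switch applies verbatim to `cursorTrackV2` and `keystrokeTrackV2`.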
In `@Screenize/Timeline/Track.swift`:
- Around line 454-499: CameraTrack duplicates overlap-guarded segment management
logic found in CursorTrackV2 (addSegment, updateSegment, removeSegment,
activeSegment, hasOverlap); extract a generic solution: define a protocol (e.g.,
SegmentProtocol with id, startTime, endTime) and a TrackManaging extension or
helper generic type (e.g., SegmentTrackManager<T: SegmentProtocol>) that
implements addSegment(_:), updateSegment(_:), removeSegment(id:),
activeSegment(at:), and hasOverlap(_:excluding:), then make CameraTrack and
CursorTrackV2 use that shared implementation by delegating to the manager or
conforming to a protocol extension so the duplicate implementations (addSegment,
updateSegment, removeSegment, activeSegment, hasOverlap) are removed from both
types.
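The shared implementation the comment proposes can be sketched as a generic store; the names (`SegmentProtocol`, `SegmentStore`) are illustrative, and `CameraTrack`/`CursorTrackV2` would delegate to an instance of it.

```swift
import Foundation

// Shared overlap-guarded segment storage, replacing the duplicated
// addSegment/updateSegment/removeSegment/activeSegment logic in both tracks.
protocol SegmentProtocol {
    var id: UUID { get }
    var startTime: Double { get }
    var endTime: Double { get }
}

struct SegmentStore<S: SegmentProtocol> {
    private(set) var segments: [S] = []

    private func hasOverlap(_ s: S, excluding id: UUID? = nil) -> Bool {
        segments.contains {
            $0.id != id && s.startTime < $0.endTime && $0.startTime < s.endTime
        }
    }

    mutating func add(_ s: S) -> Bool {
        guard !hasOverlap(s) else { return false }
        segments.append(s)
        segments.sort { $0.startTime < $1.startTime }  // keep time-ordered
        return true
    }

    mutating func update(_ s: S) -> Bool {
        guard let i = segments.firstIndex(where: { $0.id == s.id }),
              !hasOverlap(s, excluding: s.id) else { return false }
        segments[i] = s
        return true
    }

    mutating func remove(id: UUID) { segments.removeAll { $0.id == id } }

    func activeSegment(at time: Double) -> S? {
        segments.first { $0.startTime <= time && time < $0.endTime }
    }
}

// Hypothetical conforming segment type.
struct CamSeg: SegmentProtocol {
    let id = UUID(); var startTime: Double; var endTime: Double
}

var store = SegmentStore<CamSeg>()
_ = store.add(CamSeg(startTime: 0, endTime: 1))
let rejected = store.add(CamSeg(startTime: 0.5, endTime: 2))  // overlap → false
print(rejected, store.segments.count)  // false 1
```

Each track then exposes the same API by forwarding to its store, so the overlap rules live in exactly one place.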
- Around line 14-55: CameraSegment's stored properties mode, cursorFollow, and
transitionToNext (and CursorSegment's clickFeedback) are serialized but never
read by FrameEvaluator; either annotate them with clear TODO comments noting
they're placeholders for future features (e.g., "// TODO: implement transition
blending / cursor follow in FrameEvaluator") or change their types to be
deferred/optional so they don't force dead data into the model (e.g., make
mode?/cursorFollow?/transitionToNext? and clickFeedback?); update the
declarations in CameraSegment and CursorSegment and add a brief note in
FrameEvaluator referencing these symbols so reviewers know why they're unused
now.
In `@Screenize/ViewModels/EditorViewModel.swift`:
- Line 481: Update stale "keyframe" terminology to "segment" across this file:
rename MARK "Keyframe Management" to "Segment Management", update the property
autoGenerateKeyframes to use segment wording in its doc comment (e.g.,
"Auto-generate segments"), and replace any other comments or docstrings that
mention "keyframe", "Auto-generate keyframes", or "keyframe tracks" with
"segment", "Auto-generate segments", or "segment tracks". Search for the
identifier autoGenerateKeyframes and any comment blocks around methods that
manipulate tracks and update their wording to be consistent with the
segment-based refactor.
In `@Screenize/Views/EditorMainView.swift`:
- Around line 276-300: The code still uses "keyframe" terminology; rename the
MARK comment and the property keyframeAddMenu to use "segment" (e.g., // MARK: -
Segment Add Menu and segmentAddMenu), rename the generateKeyframes() method to
generateSegments() and update all call sites (including the toolbar reference
that invokes it), and change the user-facing string "Selected types replace
existing keyframes. Unselected types are preserved." to use "segments" instead;
ensure you update any identifiers, labels, and comments consistently
(keyframeAddMenu -> segmentAddMenu, generateKeyframes() -> generateSegments(),
MARK and UILabel text) so compilation and UX reflect "segments" everywhere.
In `@Screenize/Views/Inspector/InspectorView.swift`:
- Around line 104-141: The cameraSection creates many per-field
Binding(get:set:) wrappers for cameraSegmentBinding fields (startTime, endTime,
startTransform.zoom, endTransform.zoom) — reduce boilerplate by adding a small
generic helper (e.g., a Binding extension or helper function like binding(for:_,
keyPath:)) that derives a field Binding from the segment Binding using a
WritableKeyPath; replace the inline Binding(get:set:) calls in cameraSection
(and similarly in cursor/keystroke sections) with calls to that helper so you
can bind startTime, endTime, startTransform, endTransform and then further
access zoom via key paths or map TransformValue, keeping references to
cameraSegmentBinding, timeRangeFields, TransformValue, startTransform,
endTransform, startTime and endTime to locate the usage sites.
- Around line 254-278: cameraSegmentBinding currently captures trackIndex which
can become stale if timeline.tracks mutates; change the get/set closures to
re-resolve the transform track each time by searching timeline.tracks for
trackType == .transform (e.g., using firstIndex(where:) or pattern-matching to
find the .camera case) and then locate the segment by id inside those closures
instead of using the outer trackIndex; ensure set replaces the correct track by
reconstructing the .camera(track) enum case when updating and call
onSegmentChange(), and apply the same pattern to the cursor/keystroke binding
helpers.
- Around line 234-252: timeRangeFields currently lets users enter arbitrary
Double values; add immediate input validation and clamping inside
timeRangeFields so the start and end bindings are constrained to [0,
mediaDuration] and to each other (start <= end). Modify the view to accept or
capture the media duration (e.g., add a duration parameter or reference the
ViewModel) and add onChange handlers to the two TextFields that clamp the
changed value with min/max (0 and duration) and enforce ordering by adjusting
the other binding if needed; ensure these handlers call the existing
updateSegmentTimeRange/updateSegment (or the ViewModel method used to persist
changes) after clamping so the model stays consistent.
In `@Screenize/Views/Timeline/TimelineView.swift`:
- Around line 382-398: The .onTapGesture on the segment view conflicts with the
move gesture (moveGesture) because the DragGesture uses minimumDistance: 0 and
triggers onChanged/onEnded, causing duplicate selection calls to onSegmentSelect
and setting selectedSegmentID/selectedSegmentTrackType; fix by either removing
the .onTapGesture block entirely (rely on moveGesture's onEnded to call
onSegmentSelect and set selectedSegmentID/selectedSegmentTrackType) or modify
the DragGesture inside moveGesture to use a small minimumDistance (e.g., 2) so
simple taps are not interpreted as drags, and ensure only one code path updates
onSegmentSelect, selectedSegmentID and selectedSegmentTrackType.
- Around line 258-307: The switch in segmentBlocks(for:) duplicates the same
mapping/ForEach for camera/cursor/keystroke; extract a generic helper to avoid
repetition. Define a small protocol (e.g., SegmentRepresentable) exposing id,
startTime, endTime and make the camera/cursor/keystroke segment types conform,
then implement a helper method like renderSegmentBlocks<S: Collection &
...>(segments: S, trackType: TrackType) that builds the [SegmentRange] from
segments, computes snapTargets(from:excluding:) and
editBounds(from:excluding:currentStart:currentEnd:) and invokes
segmentBlock(...) inside a ForEach; finally call that helper from
segmentBlocks(for:) for each case (passing cameraTrack.segments,
cursorTrack.segments, keystrokeTrack.segments and the corresponding trackType)
so the mapping logic is centralized and duplicated code removed.
- Around line 640-664: Replace the fragile push/pop stack logic in
updateResizeCursor(_:) and resetResizeCursorIfNeeded() by setting the cursor
directly on enter/exit: on hover enter call NSCursor.resizeLeftRight.set(), on
hover exit call NSCursor.arrow.set(), and remove the resizeCursorHoverCount and
any NSCursor.push()/pop() usage; update any callers of updateResizeCursor and
onDisappear cleanup to stop relying on resetResizeCursorIfNeeded so the cursor
state is deterministic even if hover events are mismatched.
- Around line 309-408: segmentBlock is too long; extract the repeated left/right
handle UI into a small helper (e.g., SegmentHandle view or
makeSegmentHandle(...) function) that encapsulates the Rectangle styling,
.contentShape, .onHover { updateResizeCursor(...) } and the
highPriorityGesture(resizeGesture(...)) call (pass id, trackType, start, end,
mode: .resizeStart/.resizeEnd, snapTargets, editBounds, and read
selectedSegmentID for styling). Replace the two inline Rectangle handle blocks
in segmentBlock with SegmentHandle(...) instances and keep segmentBlock
responsible only for layout, calling moveGesture(...) and onSegmentSelect as it
does now; this will reduce function_body_length and remove duplication while
preserving resizeGesture, moveGesture, onSegmentSelect, selectedSegmentID, and
updateResizeCursor usage.
- All generated logs/summaries/tags for vector DB workflows must be in English.

## Build, Lint, and Test Commands

Run all commands from repository root (`/Users/sung-yein/Workspace/screenize`).
Hardcoded local path should be generic.
/Users/sung-yein/Workspace/screenize is a developer-specific absolute path. Agents running in different environments (CI, other developers, sandboxed) won't find this path.
📝 Suggested fix
-Run all commands from repository root (`/Users/sung-yein/Workspace/screenize`).
+Run all commands from the repository root.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` at line 23, Replace the developer-specific absolute path
`/Users/sung-yein/Workspace/screenize` in AGENTS.md with a generic
repository-root reference (e.g., "repository root", `$(pwd)`, or `$(git
rev-parse --show-toplevel)`) or a placeholder like `<repo-root>` so instructions
work across machines and CI; update the sentence "Run all commands from
repository root (`/Users/sung-yein/Workspace/screenize`)." to use the chosen
generic form.
### Naming Conventions
- Use descriptive Swift names in standard Apple style.
- Use `Manager` / `Coordinator` suffixes for orchestration components.
- Keep keyframe/timeline terms consistent with domain model naming.
AGENTS.md references terminology and features that this PR removes.
Three items are now stale given this PR's changes:
- Line 111: mentions "keyframe/timeline terms" — the domain model is migrating to "segment"-based naming.
- Line 123: instructs "Keep timeline keyframes sorted by time within tracks" — should reference segments.
- Line 129: says "Respect existing migration path for legacy `.fsproj` handling" — but this PR removes `.fsproj` support entirely.
Since this file is being added in the same PR, it should reflect the post-migration state.
Also applies to: 123-123, 129-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` at line 111, Update AGENTS.md to reflect the post-migration naming
and removed legacy support: replace the phrase "keyframe/timeline terms" with
"segment-based naming" (or "segments"), change "Keep timeline keyframes sorted
by time within tracks" to "Keep segments sorted by start time within tracks" (or
equivalent segment-sorting guidance), and remove or replace "Respect existing
migration path for legacy `.fsproj` handling" with a note that `.fsproj` support
has been removed and no migration path is provided; ensure the three phrases
mentioned are edited to use "segment" terminology and the `.fsproj` sentence is
deleted or rewritten accordingly.
guard project.version == 5 else {
    throw PackageManagerError.unsupportedProjectVersion(project.version)
}
Hard version pin rejects all prior versions — confirm this is intentional.
project.version == 5 will reject version 4 (and earlier) packages with no migration path. Given the PR removes the legacy migrator, users with existing v4 .screenize packages will see "Unsupported project version" with no recourse other than re-creating from scratch.
If v4→v5 migration is intentionally unsupported, consider improving the error message to be more specific (e.g., mentioning which versions are supported, or linking to migration instructions).
Suggested error message improvement
case .unsupportedProjectVersion(let version):
- return "Unsupported project version: \(version). Please create a new project."
+ return "Unsupported project version \(version) (expected 5). Please re-record or create a new project."

🤖 Prompt for AI Agents
+ return "Unsupported project version \(version) (expected 5). Please re-record or create a new project."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Project/PackageManager.swift` around lines 217 - 219, The guard
currently hard-fails on any project.version != 5 (guard project.version == 5)
which will block v4 users with no migration path; update the check/throw to
either (A) accept older versions and invoke a migrator if you intend to support
v4→v5 (e.g., adjust the condition and call the migrator function), or (B) if v4
is intentionally unsupported, replace the thrown
PackageManagerError.unsupportedProjectVersion(project.version) with a more
descriptive error that lists supported versions (e.g., "supported: 5") and
provides migration guidance or a link, using the same symbols (the guard in
PackageManager.swift and PackageManagerError.unsupportedProjectVersion) so
callers see actionable instructions instead of a generic unsupported-version
message.
let duration = max(0.001, segment.endTime - segment.startTime)
let rawProgress = CGFloat((time - segment.startTime) / duration)
let progress = segment.interpolation.apply(rawProgress, duration: CGFloat(duration))
rawProgress is not clamped to [0, 1].
While activeSegment(at:) constrains the time range, floating-point edge cases could produce a rawProgress marginally outside [0, 1]. If EasingCurve.apply doesn't handle out-of-range input, this could produce unexpected interpolation artifacts at segment boundaries.
Defensive clamp
let duration = max(0.001, segment.endTime - segment.startTime)
- let rawProgress = CGFloat((time - segment.startTime) / duration)
+ let rawProgress = min(1, max(0, CGFloat((time - segment.startTime) / duration)))
+ let rawProgress = min(1, max(0, CGFloat((time - segment.startTime) / duration)))📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let duration = max(0.001, segment.endTime - segment.startTime) | |
| let rawProgress = CGFloat((time - segment.startTime) / duration) | |
| let progress = segment.interpolation.apply(rawProgress, duration: CGFloat(duration)) | |
| let duration = max(0.001, segment.endTime - segment.startTime) | |
| let rawProgress = min(1, max(0, CGFloat((time - segment.startTime) / duration))) | |
| let progress = segment.interpolation.apply(rawProgress, duration: CGFloat(duration)) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Render/FrameEvaluator.swift` around lines 88 - 90, Clamp
rawProgress to the [0, 1] range before passing it to segment.interpolation.apply
to avoid floating-point edge cases producing values slightly outside the valid
range; locate the computation of rawProgress in FrameEvaluator (where duration
is computed and rawProgress is defined) and replace the direct rawProgress usage
with a clamped value (e.g., using min(max(rawProgress, 0), 1)) so progress =
segment.interpolation.apply(clampedRawProgress, duration: CGFloat(duration)).
for click in sortedClicks {
    switch click.clickType {
    case .leftDown, .rightDown:
        pendingDownByType[click.clickType] = click

    case .leftUp, .rightUp:
        let downType: ClickEventData.ClickType = (click.clickType == .leftUp) ? .leftDown : .rightDown
        guard let down = pendingDownByType[downType] else { continue }

        let clickType: ClickType = (downType == .leftDown) ? .left : .right
        let duration = max(0.03, click.time - down.time)
        clicks.append(RenderClickEvent(
            timestamp: down.time,
            duration: duration,
            x: down.position.x,
            y: down.position.y,
            clickType: clickType
        ))
        pendingDownByType.removeValue(forKey: downType)

    case .doubleClick:
        clicks.append(RenderClickEvent(
            timestamp: click.time,
            duration: 0.1,
            x: click.position.x,
            y: click.position.y,
            clickType: .left
        ))
    }
Consecutive down events of the same type silently overwrite — first click is lost.
On line 29, if two leftDown events arrive before a leftUp, the first down is overwritten in pendingDownByType and no RenderClickEvent is ever emitted for it. This could happen if an up event was missed during recording.
Consider flushing the existing pending down before storing the new one:
Proposed fix

```diff
 case .leftDown, .rightDown:
+    // Flush any existing unmatched down of the same type
+    if let existing = pendingDownByType[click.clickType] {
+        let clickType: ClickType = (existing.clickType == .leftDown) ? .left : .right
+        clicks.append(RenderClickEvent(
+            timestamp: existing.time,
+            duration: 0.1,
+            x: existing.position.x,
+            y: existing.position.y,
+            clickType: clickType
+        ))
+    }
     pendingDownByType[click.clickType] = click
```

📝 Committable suggestion
```swift
for click in sortedClicks {
    switch click.clickType {
    case .leftDown, .rightDown:
        // Flush any existing unmatched down of the same type
        if let existing = pendingDownByType[click.clickType] {
            let clickType: ClickType = (existing.clickType == .leftDown) ? .left : .right
            clicks.append(RenderClickEvent(
                timestamp: existing.time,
                duration: 0.1,
                x: existing.position.x,
                y: existing.position.y,
                clickType: clickType
            ))
        }
        pendingDownByType[click.clickType] = click
    case .leftUp, .rightUp:
        let downType: ClickEventData.ClickType = (click.clickType == .leftUp) ? .leftDown : .rightDown
        guard let down = pendingDownByType[downType] else { continue }
        let clickType: ClickType = (downType == .leftDown) ? .left : .right
        let duration = max(0.03, click.time - down.time)
        clicks.append(RenderClickEvent(
            timestamp: down.time,
            duration: duration,
            x: down.position.x,
            y: down.position.y,
            clickType: clickType
        ))
        pendingDownByType.removeValue(forKey: downType)
    case .doubleClick:
        clicks.append(RenderClickEvent(
            timestamp: click.time,
            duration: 0.1,
            x: click.position.x,
            y: click.position.y,
            clickType: .left
        ))
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Render/MouseDataConverter.swift` around lines 26 - 54, The loop
handling sortedClicks currently overwrites an existing pending down in
pendingDownByType when a second .leftDown/.rightDown arrives, losing the first
click; modify the branch that handles .leftDown and .rightDown (in the for loop
over sortedClicks in MouseDataConverter) to detect an existing pending down for
that ClickEventData.ClickType, emit a RenderClickEvent for the previous down
with a safe minimal duration/coordinates (mirror how .leftUp/.rightUp builds
RenderClickEvent), then replace the pendingDownByType entry with the new down so
the latest is stored; ensure you use the same ClickType mapping (leftDown→.left,
rightDown→.right) and remove/replace the pending entry consistently.
```diff
 private func addTransformSegment(at time: TimeInterval) {
     guard let trackIndex = project.timeline.tracks.firstIndex(where: { $0.trackType == .transform }) else {
         return
     }

-    guard case .transform(var track) = project.timeline.tracks[trackIndex] else {
+    guard case .camera(var track) = project.timeline.tracks[trackIndex] else {
         return
     }

-    // Create a keyframe based on the current state
-    let newKeyframe = TransformKeyframe(
-        time: time,
-        zoom: 1.0,
-        centerX: 0.5,
-        centerY: 0.5
-    )
+    let endTime = min(duration, time + 1.0)
+    let newSegment = CameraSegment(
+        startTime: time,
+        endTime: max(time + 0.05, endTime),
+        startTransform: .identity,
+        endTransform: .identity
+    )

-    track.keyframes.append(newKeyframe)
-    track.keyframes.sort { $0.time < $1.time }
+    _ = track.addSegment(newSegment)

-    project.timeline.tracks[trackIndex] = .transform(track)
-    selectedKeyframeID = newKeyframe.id
-    selectedTrackType = .transform
+    project.timeline.tracks[trackIndex] = .camera(track)
+    selectedSegmentID = newSegment.id
+    selectedSegmentTrackType = .transform
 }

-private func addRippleKeyframe(at time: TimeInterval) {
-    guard let trackIndex = project.timeline.tracks.firstIndex(where: { $0.trackType == .ripple }) else {
-        return
-    }
-
-    guard case .ripple(var track) = project.timeline.tracks[trackIndex] else {
-        return
-    }
-
-    let newKeyframe = RippleKeyframe(
-        time: time,
-        x: 0.5,
-        y: 0.5
-    )
-
-    track.keyframes.append(newKeyframe)
-    track.keyframes.sort { $0.time < $1.time }
-
-    project.timeline.tracks[trackIndex] = .ripple(track)
-    selectedKeyframeID = newKeyframe.id
-    selectedTrackType = .ripple
-}
```
Segment endTime can exceed duration when time is near or at the end.
When `time >= duration` (e.g., the playhead is at the very end), `endTime` is clamped to `duration`, but `max(time + 0.05, endTime)` then pushes it past `duration`:
- `endTime = min(duration, time + 1.0)` evaluates to `duration`
- `max(time + 0.05, duration)` evaluates to `duration + 0.05`
The other `addCursorSegment` and `addKeystrokeSegment` methods have the inverse problem: `startTime == endTime` produces a zero-length segment when `time == duration`.
Consider clamping `time` at the start and ensuring `endTime` never exceeds `duration`:
Proposed fix for addTransformSegment

```diff
 private func addTransformSegment(at time: TimeInterval) {
     guard let trackIndex = project.timeline.tracks.firstIndex(where: { $0.trackType == .transform }) else {
         return
     }
     guard case .camera(var track) = project.timeline.tracks[trackIndex] else {
         return
     }
-    let endTime = min(duration, time + 1.0)
+    let clampedTime = min(time, max(0, duration - 0.05))
+    let endTime = min(duration, clampedTime + 1.0)
     let newSegment = CameraSegment(
-        startTime: time,
-        endTime: max(time + 0.05, endTime),
+        startTime: clampedTime,
+        endTime: max(clampedTime + 0.05, endTime),
         startTransform: .identity,
         endTransform: .identity
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/ViewModels/EditorViewModel.swift` around lines 506 - 528, In
addTransformSegment, clamp the start time and ensure the segment end never
exceeds duration: compute a clampedStart = min(time, max(0, duration - 0.05))
(so startTime never sits at or past duration), compute candidateEnd =
min(duration, clampedStart + 1.0), then set endTime = max(clampedStart + 0.05,
candidateEnd) (or simply endTime = min(duration, max(clampedStart + 0.05,
candidateEnd))) before creating CameraSegment(startTime: clampedStart, endTime:
endTime). Update references to startTime/endTime used in CameraSegment creation
and keep the rest of the logic (track lookup, track.addSegment,
selectedSegmentID, selectedSegmentTrackType) unchanged.
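Plugging concrete numbers into the two formulas makes the boundary case explicit (a standalone arithmetic sketch; `duration` stands in for the project duration):

```swift
// Failure mode: playhead parked exactly at the end of the timeline.
let duration = 30.0
let time = 30.0

let endTime = min(duration, time + 1.0)         // 30.0  (clamped correctly)
let segmentEnd = max(time + 0.05, endTime)      // 30.05 (past duration)

// With the proposed clamp applied first, the segment stays inside the timeline:
let clampedTime = min(time, max(0, duration - 0.05))   // 29.95
let fixedEnd = min(duration, clampedTime + 1.0)        // 30.0
let fixedSegmentEnd = max(clampedTime + 0.05, fixedEnd) // 30.0
```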
```diff
 let endTime = min(duration, time + 1.0)
 let newSegment = CameraSegment(
     startTime: time,
     endTime: max(time + 0.05, endTime),
     startTransform: .identity,
     endTransform: .identity
 )

-track.keyframes.append(newKeyframe)
-track.keyframes.sort { $0.time < $1.time }
+_ = track.addSegment(newSegment)
```
Silent discard of `addSegment` return value when overlap exists.
`CameraTrack.addSegment` and `CursorTrackV2.addSegment` return `Bool` to indicate whether the segment was added (per Track.swift:469-474 and 516-521). Here the result is discarded with `_ =` (lines 523 and 547). If the add fails due to overlap, the user sees no feedback, yet `selectedSegmentID` is set to the non-existent segment's ID (lines 526-527, 550-551), leaving the selection in an invalid state.
Consider guarding on the result and skipping the selection update on failure.
Proposed fix (camera example; apply same pattern to cursor)

```diff
-_ = track.addSegment(newSegment)
-
-project.timeline.tracks[trackIndex] = .camera(track)
-selectedSegmentID = newSegment.id
-selectedSegmentTrackType = .transform
+guard track.addSegment(newSegment) else { return }
+
+project.timeline.tracks[trackIndex] = .camera(track)
+selectedSegmentID = newSegment.id
+selectedSegmentTrackType = .transform
```

Also applies to: 539-547
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/ViewModels/EditorViewModel.swift` around lines 515 - 523, The code
currently discards the Bool result of track.addSegment when creating
CameraSegment/Cursor segments, then unconditionally sets selectedSegmentID to
the new segment's id, which can leave selection pointing at a non-existent
segment if addSegment returns false due to overlap; update the logic in the
EditorViewModel methods that create segments (the CameraSegment and Cursor
segment creation blocks that call track.addSegment) to check the returned Bool
and only assign selectedSegmentID and perform further selection-related work
when addSegment returns true, otherwise skip the selection update and optionally
early-return or surface an error; specifically guard the result of
CameraTrack.addSegment/ CursorTrackV2.addSegment before setting
selectedSegmentID (and before any code that assumes the segment was added).
```diff
+/// Update segment start/end time as a single edit operation.
+@discardableResult
+func updateSegmentTimeRange(_ id: UUID, startTime: TimeInterval, endTime: TimeInterval) -> Bool {
+    let clampedStart = max(0, min(duration, startTime))
+    let clampedEnd = min(duration, max(clampedStart + 0.001, endTime))
+
     for (trackIndex, anyTrack) in project.timeline.tracks.enumerated() {
         switch anyTrack {
-        case .transform(var track):
-            if let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                track.keyframes[keyframeIndex].time = newTime
-                track.keyframes.sort { $0.time < $1.time }
-                project.timeline.tracks[trackIndex] = .transform(track)
-                hasUnsavedChanges = true
-                invalidatePreviewCache()
-                return
-            }
-
-        case .ripple(var track):
-            if let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                track.keyframes[keyframeIndex].time = newTime
-                track.keyframes.sort { $0.time < $1.time }
-                project.timeline.tracks[trackIndex] = .ripple(track)
-                hasUnsavedChanges = true
-                invalidatePreviewCache()
-                return
-            }
+        case .camera(var track):
+            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
+                continue
+            }
+
+            saveUndoSnapshot()
+            var segment = track.segments[index]
+            segment.startTime = clampedStart
+            segment.endTime = clampedEnd
+            guard track.updateSegment(segment) else {
+                return false
+            }
+
+            project.timeline.tracks[trackIndex] = .camera(track)
+            hasUnsavedChanges = true
+            invalidatePreviewCache()
+            return true

         case .cursor(var track):
-            if var keyframes = track.styleKeyframes,
-               let keyframeIndex = keyframes.firstIndex(where: { $0.id == id }) {
-                keyframes[keyframeIndex].time = newTime
-                keyframes.sort { $0.time < $1.time }
-                track = CursorTrack(
-                    id: track.id,
-                    name: track.name,
-                    isEnabled: track.isEnabled,
-                    defaultStyle: track.defaultStyle,
-                    defaultScale: track.defaultScale,
-                    defaultVisible: track.defaultVisible,
-                    styleKeyframes: keyframes
-                )
-                project.timeline.tracks[trackIndex] = .cursor(track)
-                hasUnsavedChanges = true
-                invalidatePreviewCache()
-                return
-            }
+            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
+                continue
+            }
+
+            saveUndoSnapshot()
+            var segment = track.segments[index]
+            segment.startTime = clampedStart
+            segment.endTime = clampedEnd
+            guard track.updateSegment(segment) else {
+                return false
+            }
+
+            project.timeline.tracks[trackIndex] = .cursor(track)
+            hasUnsavedChanges = true
+            invalidatePreviewCache()
+            return true

         case .keystroke(var track):
-            if let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                track.keyframes[keyframeIndex].time = newTime
-                track.keyframes.sort { $0.time < $1.time }
-                project.timeline.tracks[trackIndex] = .keystroke(track)
-                hasUnsavedChanges = true
-                invalidatePreviewCache()
-                return
-            }
+            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
+                continue
+            }
+
+            saveUndoSnapshot()
+            var segment = track.segments[index]
+            segment.startTime = clampedStart
+            segment.endTime = clampedEnd
+            guard track.updateSegment(segment) else {
+                return false
+            }
+
+            project.timeline.tracks[trackIndex] = .keystroke(track)
+            hasUnsavedChanges = true
+            invalidatePreviewCache()
+            return true
         }
     }
+
+    return false
 }
```
updateSegmentTimeRange pushes an undo snapshot on each call, even if the update is rejected.
saveUndoSnapshot() is called (lines 652, 670, 688) before track.updateSegment(segment) is verified. If updateSegment returns false (overlap), an undo snapshot has already been pushed for a no-op change. This pollutes the undo stack.
Move snapshot after validation
Since `track` is a local value-type copy, `saveUndoSnapshot()` still captures the pre-mutation project state even when it runs after `track.updateSegment(segment)` — nothing in the project is mutated until the track is written back. With the current ordering, however, a failed `updateSegment` leaves a snapshot on the stack with no corresponding change, so undo replays a no-op. A cleaner approach:

```diff
 case .camera(var track):
     guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
         continue
     }
     var segment = track.segments[index]
     segment.startTime = clampedStart
     segment.endTime = clampedEnd
-    saveUndoSnapshot()
     guard track.updateSegment(segment) else {
         return false
     }
+    saveUndoSnapshot()
     project.timeline.tracks[trackIndex] = .camera(track)
```

Apply the same reordering for the .cursor and .keystroke branches.
📝 Committable suggestion

```swift
/// Update segment start/end time as a single edit operation.
@discardableResult
func updateSegmentTimeRange(_ id: UUID, startTime: TimeInterval, endTime: TimeInterval) -> Bool {
    let clampedStart = max(0, min(duration, startTime))
    let clampedEnd = min(duration, max(clampedStart + 0.001, endTime))
    for (trackIndex, anyTrack) in project.timeline.tracks.enumerated() {
        switch anyTrack {
        case .camera(var track):
            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
                continue
            }
            var segment = track.segments[index]
            segment.startTime = clampedStart
            segment.endTime = clampedEnd
            guard track.updateSegment(segment) else {
                return false
            }
            saveUndoSnapshot()
            project.timeline.tracks[trackIndex] = .camera(track)
            hasUnsavedChanges = true
            invalidatePreviewCache()
            return true
        case .cursor(var track):
            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
                continue
            }
            var segment = track.segments[index]
            segment.startTime = clampedStart
            segment.endTime = clampedEnd
            guard track.updateSegment(segment) else {
                return false
            }
            saveUndoSnapshot()
            project.timeline.tracks[trackIndex] = .cursor(track)
            hasUnsavedChanges = true
            invalidatePreviewCache()
            return true
        case .keystroke(var track):
            guard let index = track.segments.firstIndex(where: { $0.id == id }) else {
                continue
            }
            var segment = track.segments[index]
            segment.startTime = clampedStart
            segment.endTime = clampedEnd
            guard track.updateSegment(segment) else {
                return false
            }
            saveUndoSnapshot()
            project.timeline.tracks[trackIndex] = .keystroke(track)
            hasUnsavedChanges = true
            invalidatePreviewCache()
            return true
        }
    }
    return false
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/ViewModels/EditorViewModel.swift` around lines 639 - 704, The undo
snapshot is currently pushed before validating the update (saveUndoSnapshot() is
called before track.updateSegment(segment)), which creates a no-op undo if
updateSegment returns false; move the saveUndoSnapshot() call so it runs only
after track.updateSegment(segment) returns true (i.e., call saveUndoSnapshot()
immediately after the guard that verifies track.updateSegment(segment) and
before assigning project.timeline.tracks[trackIndex] =
.camera/.cursor/.keystroke(track) and setting
hasUnsavedChanges/invalidatePreviewCache), and apply this same reorder in the
.camera, .cursor and .keystroke branches of updateSegmentTimeRange.
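The safety argument for the reorder rests on Swift value semantics. A standalone toy model (hypothetical `Segment`/`Track`/`Timeline` types, not the app's) shows that mutating the local copy leaves the owning container untouched until the write-back:

```swift
struct Segment { var start: Double; var end: Double }
struct Track { var segments: [Segment] }
struct Timeline { var tracks: [Track] }

var timeline = Timeline(tracks: [Track(segments: [Segment(start: 0, end: 1)])])

var track = timeline.tracks[0]     // independent value-type copy
track.segments[0].end = 5          // mutates the copy only

// A snapshot taken anywhere before the write-back still sees the old state:
// timeline.tracks[0].segments[0].end is still 1 here.

timeline.tracks[0] = track         // the only real mutation
// Now timeline.tracks[0].segments[0].end == 5.
```

This is why `saveUndoSnapshot()` can run after `updateSegment` validates: validation only touches the copy, so the snapshot still captures the pre-edit project.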
```swift
    DragGesture(minimumDistance: 0)
        .onChanged { value in
            isDragging = true
            x = max(0, min(1, value.location.x / size.width))
            y = max(0, min(1, value.location.y / size.height))
            onChange?()
        }
        .onEnded { _ in
            isDragging = false
        }
)
```
Guard against zero-size geometry in the drag gesture.
If size.width or size.height is zero (e.g. during initial layout or zero-size frame), the division on lines 45–46 would produce NaN/inf, corrupting the bound values.
🛡️ Proposed guard

```diff
 .gesture(
     DragGesture(minimumDistance: 0)
         .onChanged { value in
+            guard size.width > 0, size.height > 0 else { return }
             isDragging = true
             x = max(0, min(1, value.location.x / size.width))
             y = max(0, min(1, value.location.y / size.height))
             onChange?()
         }
```

📝 Committable suggestion
```swift
    DragGesture(minimumDistance: 0)
        .onChanged { value in
            guard size.width > 0, size.height > 0 else { return }
            isDragging = true
            x = max(0, min(1, value.location.x / size.width))
            y = max(0, min(1, value.location.y / size.height))
            onChange?()
        }
        .onEnded { _ in
            isDragging = false
        }
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Views/Inspector/PositionPicker.swift` around lines 42 - 52, Guard
against zero-size geometry in the DragGesture handlers: before computing x and y
in the .onChanged closure of the DragGesture, check that size.width and
size.height are > 0 and only perform value.location.x / size.width and
value.location.y / size.height when valid; otherwise skip updating (or set to a
safe default like 0) to avoid producing NaN/inf. Update the .onChanged block
around the DragGesture (where isDragging, x, y, and onChange are used) to
perform these checks and still call onChange?() only when a real update to x/y
occurs.
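The NaN/infinity claim is straightforward to verify in a Swift REPL; comparisons against NaN are always false, which is why guarding before the division is the robust fix:

```swift
// Division by a zero-sized dimension produces non-finite values.
let width = 0.0
print(5.0 / width)                       // inf
print(0.0 / width)                       // nan
print(Double.nan < 1, Double.nan > 1)    // false false
```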
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
Screenize/Models/MouseEvent.swift (1)
322-358: ⚠️ Potential issue | 🟠 Major
Synthesized `Codable` will break decoding of any previously saved `MouseRecording` JSON lacking the newer fields.
Swift's synthesized `Decodable` does not use default values from the memberwise `init()`. The fields `scrollEvents`, `keyboardEvents`, `dragEvents`, and `uiStateSamples` have defaults in `init()`, but the synthesized decoder requires all stored properties to be present in the JSON. If any previously serialized `MouseRecording` files (from before these fields were added) still need to be loaded, the `load()` function will throw a decoding error.
Either confirm that legacy `MouseRecording` data is intentionally discarded, or implement a custom `init(from:)` using `decodeIfPresent()` with fallbacks for these optional-at-save-time fields to maintain backward compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Models/MouseEvent.swift` around lines 322 - 358, The synthesized Decodable for struct MouseRecording will fail to decode legacy JSON missing the new properties (scrollEvents, keyboardEvents, dragEvents, uiStateSamples and possibly frameRate/scaleFactor/createdAt); implement a custom init(from decoder: Decoder) in MouseRecording that uses container.decodeIfPresent(...) for those keys and supplies sensible defaults (empty arrays, default frameRate 60, scaleFactor 1.0, createdAt = Date() or decodeIfPresent with fallback) so existing load() calls can decode older files without throwing; locate MouseRecording and replace the synthesized decoding by adding this init(from:) and keep the existing memberwise init for creation.
Screenize/Views/Inspector/TransformInspector.swift (1)
156-178: ⚠️ Potential issue | 🟡 Minor
Slider changes don't fire `onChange?()`, unlike the TextField setters.
The X/Y `Slider` bindings (lines 156-159, 175-178) update `keyframe.center` but never call `onChange?()`. The corresponding `TextField` bindings on lines 163 and 182 do call it. This means dragging the sliders won't notify the parent of changes, potentially skipping cache invalidation or other side effects.
Proposed fix

```diff
 Slider(value: Binding(
     get: { keyframe.center.x },
-    set: { keyframe.center = NormalizedPoint(x: $0, y: keyframe.center.y) }
+    set: { keyframe.center = NormalizedPoint(x: $0, y: keyframe.center.y); onChange?() }
 ), in: 0...1)

 Slider(value: Binding(
     get: { keyframe.center.y },
-    set: { keyframe.center = NormalizedPoint(x: keyframe.center.x, y: $0) }
+    set: { keyframe.center = NormalizedPoint(x: keyframe.center.x, y: $0); onChange?() }
 ), in: 0...1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/TransformInspector.swift` around lines 156 - 178, The Slider bindings for keyframe.center are missing the onChange callback; update the set closures for both X and Y Slider bindings in TransformInspector.swift so after assigning keyframe.center = NormalizedPoint(...) they also call onChange?(), mirroring the TextField setters (i.e., modify the set blocks that currently set keyframe.center to also invoke onChange?() so slider drags notify the parent).
Screenize/ScreenizeApp.swift (1)
260-282: ⚠️ Potential issue | 🟡 Minor
Package created in the source video's directory — may fail on read-only volumes.
`createPackageFromVideo` is called with `parentDirectory = url.deletingLastPathComponent()`. If the user opens a video from a read-only location (mounted DMG, network share, etc.), the `createDirectory` call inside the package manager will throw. The catch block only prints the error with no user-facing message.
Consider prompting for a writable save location or defaulting to a known writable directory (e.g., `~/Documents`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/ScreenizeApp.swift` around lines 260 - 282, openVideo currently attempts to create the .screenize package in the source video's parent directory by calling PackageManager.shared.createPackageFromVideo which will fail on read-only volumes; update openVideo to first check whether parentDirectory is writable and if not prompt the user with a save location (e.g., show a NSSavePanel/NSSavePanel-like dialog) or default to a known writable directory such as FileManager.default.urls(for: .documentDirectory, .userDomainMask).first, then pass that writable URL into createPackageFromVideo; also replace the simple print in the catch with a user-facing alert/notification (so failures from ProjectCreator.createFromVideo and projectManager.save surface to the user) and still log the error for debugging.
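One possible shape for the fallback, as a hedged sketch — `packageParentDirectory` and the `~/Documents` default are assumptions for illustration, not existing project API:

```swift
import Foundation

// Hypothetical helper: prefer the video's own directory, but fall back to a
// known writable location when the source lives on a read-only volume.
func packageParentDirectory(for videoURL: URL) -> URL {
    let preferred = videoURL.deletingLastPathComponent()
    if FileManager.default.isWritableFile(atPath: preferred.path) {
        return preferred
    }
    // ~/Documents as a writable default; temporary directory as a last resort.
    return FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask).first
        ?? FileManager.default.temporaryDirectory
}
```

The same check could instead drive an `NSSavePanel` prompt so the user picks the destination explicitly.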
🧹 Nitpick comments (18)
Screenize/Generators/SmartZoom/SessionCenterResolver.swift (2)
262-262: Minor: inconsistent indentation on line 262.
The comment `// Check the time interval` is indented with extra leading spaces compared to the surrounding code in the `for` loop body.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Generators/SmartZoom/SessionCenterResolver.swift` at line 262, The comment "// Check the time interval" in the for loop inside SessionCenterResolver.swift is over-indented compared to surrounding lines; locate the for loop in the SessionCenterResolver (e.g., inside the method where sessions are iterated) and left-align that comment to the same indentation level as the adjacent statements in the loop body so it matches the file's indentation style.
200-200: Redundant `NormalizedPoint` re-wrapping — pass the existing value directly.
`lastMoveCenter`, `newCenter`, and `currentCenter` are already `NormalizedPoint` values, so deconstructing and re-wrapping them is unnecessary. This appears in all four changed lines.
♻️ Simplified diff

```diff
-center: NormalizedPoint(x: lastMoveCenter.x, y: lastMoveCenter.y),
+center: lastMoveCenter,

-center: NormalizedPoint(x: newCenter.x, y: newCenter.y),
+center: newCenter,

-center: NormalizedPoint(x: currentCenter.x, y: currentCenter.y),
+center: currentCenter,

-center: NormalizedPoint(x: newCenter.x, y: newCenter.y),
+center: newCenter,
```

Also applies to: 208-208, 279-279, 288-288
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Generators/SmartZoom/SessionCenterResolver.swift` at line 200, The code is re-wrapping NormalizedPoint instances unnecessarily; in SessionCenterResolver (variables lastMoveCenter, newCenter, currentCenter) stop constructing new NormalizedPoint(x: ..., y: ...) and pass the existing NormalizedPoint variables directly (e.g., use center: lastMoveCenter instead of center: NormalizedPoint(x: lastMoveCenter.x, y: lastMoveCenter.y)); update the four occurrences referenced (around the center assignments) to use the variables themselves to remove redundant deconstruction/re-wrapping.

Screenize/Generators/SmartZoomGenerator.swift (1)
174-179: Redundant `NormalizedPoint` reconstruction from an existing `NormalizedPoint`.

Several locations construct `NormalizedPoint(x: existingPoint.x, y: existingPoint.y)` when the source is already a `NormalizedPoint`. You can pass it directly:

```diff
- center: NormalizedPoint(x: sessionCenter.x, y: sessionCenter.y),
+ center: sessionCenter,
```

Same applies on the lines using `effectiveCenter`/`nextCenter`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Generators/SmartZoomGenerator.swift` around lines 174 - 179, The code is redundantly reconstructing NormalizedPoint instances (e.g., NormalizedPoint(x: sessionCenter.x, y: sessionCenter.y)) when you already have a NormalizedPoint; update the TransformKeyframe constructions to pass the existing NormalizedPoint directly (use sessionCenter, effectiveCenter, nextCenter as the center argument) in the SmartZoomGenerator code paths that build keyframes (the TransformKeyframe initializers around the zoomInEndTime and the places noted with effectiveCenter/nextCenter) so you stop creating unnecessary copies.

Screenize/Models/MouseEvent.swift (1)
360-360: Stale `// MARK: - Codable` comment after removing the custom Codable implementation.

This marker no longer has any associated code beneath it. Consider removing it or moving it to annotate the synthesized conformance on the struct declaration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Models/MouseEvent.swift` at line 360, Remove the stale comment "// MARK: - Codable" that remains after you removed the custom Codable implementation in MouseEvent.swift; either delete the marker entirely or move a brief "Codable"/comment annotation up to the MouseEvent struct declaration to indicate synthesized conformance (refer to the MouseEvent type) so there is no orphaned section header with no code beneath it.

Screenize/Render/MousePositionInterpolator.swift (2)
86-118: Gaussian kernel is pre-normalized — `weightSum` division is redundant (harmless).

The kernel from `gaussianKernel` is already normalized (line 131), so `weightSum` on lines 99/106 will always equal 1.0 and the division on lines 111-112 is a no-op. Not a bug — just a minor inefficiency. Fine to keep for defensive clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/MousePositionInterpolator.swift` around lines 86 - 118, The gaussianSmooth function redundantly accumulates weightSum and divides by it even though gaussianKernel already returns a normalized kernel; remove the weightSum variable and its accumulation in gaussianSmooth, and stop dividing sumX/sumY by weightSum when constructing the new RenderMousePosition — keep the per-sample weight multiplication (weight = kernel[k + radius]) and assign x: sumX and y: sumY directly; reference symbols: gaussianSmooth, gaussianKernel, weightSum, RenderMousePosition.
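To see why the division is a no-op, the normalization claim is easy to check in isolation. A minimal standalone sketch (not the project's `gaussianKernel`; its real signature may differ):

```swift
import Foundation

// Sketch: a discrete Gaussian kernel normalized so its weights sum to 1.
// With such a kernel, accumulating a separate weightSum during smoothing
// always yields ~1.0, so a final division by weightSum changes nothing.
func gaussianKernel(radius: Int, sigma: Double) -> [Double] {
    let raw = (-radius...radius).map { k -> Double in
        exp(-Double(k * k) / (2 * sigma * sigma))
    }
    let sum = raw.reduce(0, +)
    return raw.map { $0 / sum }  // normalization step
}

let kernel = gaussianKernel(radius: 3, sigma: 1.5)
let weightSum = kernel.reduce(0, +)
// weightSum is 1.0 up to floating-point error.
```

Keeping the division is defensible only if the kernel could ever arrive unnormalized (e.g., a truncated edge window).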
227-231: SwiftLint: prefer a `where` clause over an inner `if`.

Since `lastTimestamp` is only mutated inside the loop body (on the "append" path), a `where` clause is semantically equivalent and reads more idiomatically.

Proposed fix
```diff
- for pos in positions.sorted(by: { $0.timestamp < $1.timestamp }) {
-     if pos.timestamp - lastTimestamp >= minInterval {
-         result.append(pos)
-         lastTimestamp = pos.timestamp
-     }
+ for pos in positions.sorted(by: { $0.timestamp < $1.timestamp })
+ where pos.timestamp - lastTimestamp >= minInterval
+ {
+     result.append(pos)
+     lastTimestamp = pos.timestamp
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/MousePositionInterpolator.swift` around lines 227 - 231, Replace the inner `if` inside the `for pos in positions.sorted(by: { $0.timestamp < $1.timestamp }) { ... }` loop with a `where` clause on the `for` iteration so the body only runs when `pos.timestamp - lastTimestamp >= minInterval`; move the `result.append(pos)` and `lastTimestamp = pos.timestamp` into the loop body and remove the now-unneeded `if` to match SwiftLint's preferred idiom (refer to the loop over `positions.sorted(...)`, `lastTimestamp`, `minInterval`, and `result`).

Screenize/Generators/ClickCursorGenerator.swift (1)
6-37: Class name `ClickCursorGenerator` is now misleading.

The generator no longer processes clicks or generates click-based keyframes — it just emits a single default style keyframe. The name and `KeyframeGenerator` conformance are holdovers from the old pipeline. Consider renaming to something like `DefaultCursorTrackGenerator` when convenient.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Generators/ClickCursorGenerator.swift` around lines 6 - 37, The class ClickCursorGenerator is misnamed because it no longer generates click-based keyframes; rename the class and its references to a clearer name (e.g., DefaultCursorTrackGenerator) and update its conformance if needed: change the type name ClickCursorGenerator to DefaultCursorTrackGenerator, update the human-facing name/description properties (name and description) and keep the generate(from: MouseDataSource, settings: GeneratorSettings) -> CursorTrack implementation, ensuring all usages/imports/tests referencing ClickCursorGenerator are updated to the new identifier and KeyframeGenerator conformance is preserved or adjusted accordingly.

Screenize/Views/Timeline/TimelineView.swift (3)
327-426: `segmentBlock` exceeds the recommended function body length (87 lines vs the 80-line limit).

SwiftLint flags this. Extracting the resize-handle pair into a small helper view (e.g., `SegmentHandleView`) would bring the function under the limit and improve readability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 327 - 426, The function segmentBlock is too long; extract the repeated left/right handle Rectangle code into a small reusable view (e.g., SegmentHandleView) and replace both inline handle blocks with it. Implement SegmentHandleView to accept parameters: isStart (or mode: .resizeStart/.resizeEnd), handleWidth, selectedSegmentID and id (or a isSelected Bool), the onHover callback (calling updateResizeCursor), and a gesture input so you can attach resizeGesture(...) from the caller; ensure it builds the same Rectangle fill/overlay/stroke opacities and exposes a .highPriorityGesture slot so you can pass resizeGesture(for:id:trackType:start:end:mode:snapTargets:editBounds:). Then update segmentBlock to create two SegmentHandleView instances (one for start, one for end), removing the duplicated rectangle code and keeping the rest of segmentBlock identical.
436-477: `DragGesture(minimumDistance: 0)` on the segment body may double-fire selection alongside `onTapGesture`.

With `minimumDistance: 0`, a plain tap triggers both `onEnded` of the drag (lines 472-477 — which calls `onSegmentSelect` and sets selection) and the `.onTapGesture` (lines 412-416 — which also calls `onSegmentSelect` and sets selection). The result is harmless (selection is set twice to the same value), but `onSegmentTimeRangeChange` is also called with the original times, which forces the parent to process a no-op edit.

Consider bumping `minimumDistance` to a small positive value (e.g., 2) so taps are exclusively handled by `.onTapGesture` and only intentional drags trigger the move gesture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 436 - 477, The DragGesture on the segment body uses minimumDistance: 0 which allows a tap to also trigger the drag's onEnded (calling commitInteraction/onSegmentSelect and causing a no-op onSegmentTimeRangeChange); update the DragGesture initializer in the DragGesture(...) call to use a small positive minimumDistance (e.g., 2) so taps are handled only by the existing onTapGesture and only deliberate drags invoke onChanged/onEnded (leave onChanged, onEnded, commitInteraction(for:), onSegmentSelect and selectedSegmentID/selectedSegmentTrackType logic unchanged).
276-325: Repeated segment-block mapping across all three track cases.

The camera, cursor, and keystroke branches in `segmentBlocks` are structurally identical — they map segments to `SegmentRange`, then render each via `segmentBlock(...)`. Consider extracting a helper that accepts any collection of `(id, startTime, endTime)` tuples and a `TrackType` to eliminate the three-way duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Timeline/TimelineView.swift` around lines 276 - 325, Refactor the duplicated mapping in segmentBlocks by extracting a helper (e.g., renderSegmentBlocks(for:trackType:)) that accepts a collection of segments (or a sequence of tuples with id, startTime, endTime) and a TrackType, builds the SegmentRange array, and calls ForEach to produce segmentBlock(...) with snapTargets(...) and editBounds(...); then replace each AnySegmentTrack case in segmentBlocks with a single call to this helper (referencing segmentBlocks, segmentBlock, SegmentRange, snapTargets, editBounds, AnySegmentTrack and TrackType to locate code).

Screenize/Timeline/Track.swift (1)
451-551: Near-identical overlap/segment-management logic between `CameraTrack` and `CursorTrackV2`.

`addSegment`, `updateSegment`, `removeSegment`, `activeSegment`, and `hasOverlap` are essentially copy-pasted between the two types (only differing in the segment element type). Consider extracting this into a shared protocol extension or a generic `OverlapGuardedSegmentTrack` helper to reduce duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Timeline/Track.swift` around lines 451 - 551, CameraTrack and CursorTrackV2 duplicate segment management (addSegment, updateSegment, removeSegment, activeSegment, hasOverlap) — factor this into a shared generic implementation: define a protocol (e.g., OverlapGuardedSegmentTrack) or a generic helper that requires a mutable `segments: [S]` property and provides default implementations for addSegment(_ :), updateSegment(_ :), removeSegment(id:), activeSegment(at:), and hasOverlap(_:excluding:) where S has startTime, endTime, and id; then make CameraTrack and CursorTrackV2 conform and remove the duplicated methods. Ensure the generic uses the same overlap logic and sorting used in the initializers and keeps existing symbols (CameraTrack, CursorTrackV2, addSegment, updateSegment, removeSegment, activeSegment, hasOverlap) so callers remain unchanged.

Screenize/Models/EventStreamAdapter.swift (1)
160-178: Metadata loading logic is duplicated from `load()`.

Lines 165-178 replicate the metadata-loading path from `load()` (lines 112-119). Consider extracting a shared `loadMetadata(from:interop:)` helper to avoid the duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Models/EventStreamAdapter.swift` around lines 160 - 178, The metadata-loading block duplicated in loadUIStateSamples should be extracted into a shared helper (e.g., a static method loadMetadata(from: URL, interop: InteropBlock) -> PolyRecordingMetadata?) and used by both loadUIStateSamples and load(), so remove the duplicated decoding logic inside loadUIStateSamples and call the new helper instead; ensure the helper returns an optional PolyRecordingMetadata and preserves the current failure behavior (returning nil or [] where appropriate) and reference the existing symbols loadUIStateSamples and load() when replacing the duplicated code.

Screenize/Render/ExportEngine.swift (1)
82-97: Same double-loading pattern as `PreviewEngine` — the same optimization opportunity applies.

Both engines load and decode the event stream data twice. If the recommended refactor in `PreviewEngine` is adopted (load the source once, then branch), the same approach should be applied here for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/ExportEngine.swift` around lines 82 - 97, ExportEngine currently decodes the event stream twice by calling MouseDataConverter.loadAndConvert and MouseDataConverter.loadAndConvertWithInterpolation; instead, mirror the PreviewEngine refactor: load the raw event/source once (e.g. via MouseDataConverter.loadAndConvert or a new helper that returns the raw decoded events), then produce both raw positions and smoothed positions from that single decoded source (apply the interpolation step on the already-loaded data rather than re-calling loadAndConvertWithInterpolation). Update ExportEngine's logic to call the single load helper and then branch to produce rawMousePositions, clickEvents, and smoothedMousePositions, reusing the same decoded input.

Screenize/Render/PreviewEngine.swift (1)
153-163: Mouse data is loaded from disk twice — once for raw, once for smoothed.

Both `loadAndConvert` and `loadAndConvertWithInterpolation` internally call `loadMouseDataSourceFromEventStreams`, causing duplicate I/O and decoding. Consider loading the source once, converting it, and then interpolating the positions in a second step.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/PreviewEngine.swift` around lines 153 - 163, The code calls MouseDataConverter.loadAndConvert and MouseDataConverter.loadAndConvertWithInterpolation which each re-read and decode the mouse event stream; instead, call the low-level loader once (loadMouseDataSourceFromEventStreams) to obtain the raw event source, then call the conversion step to produce raw positions (used to set rawMousePositions and renderClickEvents) and run the interpolation step on those positions to produce smoothedMousePositions; update MouseDataConverter by adding or using overloads that accept the already-loaded event source or positions (or separate convert() and interpolate() helpers) and replace the two direct load calls in PreviewEngine (where rawMousePositions, renderClickEvents and smoothedMousePositions are assigned) to use the single-load, two-step flow.

Screenize/Views/Inspector/TransformInspector.swift (1)
136-143: Consider extracting reusable center X/Y bindings to reduce duplication.

The pattern `NormalizedPoint(x: $0, y: keyframe.center.y)` / `NormalizedPoint(x: keyframe.center.x, y: $0)` is repeated across the `CenterPointPicker`, `Slider`, and `TextField` bindings. A pair of computed `Binding<CGFloat>` properties (e.g. `centerXBinding`/`centerYBinding`) would centralize the logic and the `onChange` call.

Also applies to: 156-163, 175-182
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/TransformInspector.swift` around lines 136 - 143, Several bindings for keyframe.center X/Y are duplicated across CenterPointPicker, Slider, and TextField; create two computed Binding<CGFloat> properties (e.g., centerXBinding and centerYBinding) that get from keyframe.center.x / keyframe.center.y and set by assigning keyframe.center = NormalizedPoint(x: newX, y: existingY) (and vice versa) and invoke the existing onChange there; then replace the inline Binding(...) usages in CenterPointPicker, Slider, and TextField with these computed bindings to centralize logic and remove repetition (ensure the setters call onChange after updating keyframe.center).

Screenize/App/AppState.swift (1)
380-441: `createProject` calls `buildCaptureMeta` again, duplicating work when called after recording.

In the recording flow, `buildCaptureMeta` is called in `ScreenizeApp.createProjectFromRecording()` (line 306) and then again inside `createProject` (line 386). Since both share the same `packageInfo.videoURL`, the video metadata extraction and captureMeta derivation happen twice. Consider passing the already-computed `captureMeta` into `createProject` as a parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/App/AppState.swift` around lines 380 - 441, createProject currently recomputes capture metadata by calling buildCaptureMeta again; change its signature to accept an optional CaptureMeta parameter (e.g., createProject(packageInfo: PackageInfo, captureMeta: CaptureMeta? = nil) async -> ScreenizeProject?) and use the passed-in captureMeta if non-nil, falling back to calling buildCaptureMeta only when not provided; update the call site ScreenizeApp.createProjectFromRecording() to pass the already-computed captureMeta instead of relying on the internal buildCaptureMeta so video metadata extraction and captureMeta derivation happen only once.

Screenize/Project/PackageManager.swift (1)
69-105: Significant code duplication between `createPackageFromVideo` and `createPackageV4`.

Lines 77–95 and 118–137 are nearly identical (package URL construction, directory creation, path computation, video move). Consider extracting the shared scaffolding into a private helper that both methods call, with `createPackageV4` adding the event-stream write and interop block on top.

Also applies to: 107-163
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Project/PackageManager.swift` around lines 69 - 105, create a private helper (e.g. preparePackageScaffold or buildPackageSkeleton) that encapsulates the duplicated logic in createPackageFromVideo and createPackageV4: construct packageURL, create recordingDir, compute canonicalVideoFilename, videoRelativePath and mouseDataRelativePath, compute destVideoURL and destMouseDataURL, and move the source video when appropriate; have the helper return the values needed by callers (either a PackageInfo or a small struct with packageURL, projectJSONURL, videoURL, mouseDataURL, videoRelativePath, mouseDataRelativePath) so createPackageFromVideo and createPackageV4 call this helper and then only perform their method-specific steps (createPackageFromVideo simply returns the PackageInfo, createPackageV4 writes event-stream and interop block). Ensure the helper uses the same symbols seen here (createPackageFromVideo, createPackageV4, Self.packageExtension, Self.recordingDirectory, Self.canonicalVideoBaseName, Self.canonicalMouseDataFilename, Self.projectFilename, fileManager.moveItem).

Screenize/Render/FrameEvaluator.swift (1)
165-188: Remove the unused `velocity` and `movementDirection` fields from `CursorState`.

These fields are documented for motion blur but are never accessed downstream. The rendering pipeline (`EffectCompositor`) applies motion blur only to the transform (zoom/pan), not cursor movement. Since they're hardcoded to 0 and unreachable in the rendering logic, remove them from `CursorState` to eliminate dead code and reduce confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Render/FrameEvaluator.swift` around lines 165 - 188, Remove the unused motion-blur fields from CursorState and update all construction sites accordingly: delete the velocity and movementDirection properties from the CursorState type (and any Codable/Equatable conformance or init that references them), then update FrameEvaluator.evaluateCursor to stop passing velocity and movementDirection when creating a CursorState (it should only set position, style, scale, visible, isClicking, clickType, and clickScaleModifier). Also search for any other references to CursorState.velocity or CursorState.movementDirection and remove or refactor those usages (e.g., in initializers, tests, or other code paths) so the code compiles without the two fields.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Around line 102-106: The three consecutive bullets that begin with "Use" (the
lines referencing `@MainActor`, Sendable, and async/await) read monotonous; reword
one or two to vary sentence starts while keeping their meaning and references to
`@MainActor`, Sendable, async/await, and singletons like
AppState.shared/ProjectManager.shared intact — e.g., start one with "Annotate
major app/state classes with `@MainActor`", another with "Prefer types that
conform to Sendable and enforce safe concurrency boundaries", or "Adopt
async/await consistently with existing code" so the list flows better without
changing technical content.
In `@Screenize/App/AppState.swift`:
- Around line 362-365: The current scaleFactor expression is a no-op
(display.width / display.width) and always yields 2.0; update the CaptureMeta
creation so scaleFactor computes the actual pixel-to-point scale instead of
dividing the same value by itself — e.g., replace the denominator with a
point-based measurement or use an existing display scale property (use
pixelWidth/pointWidth or display.bounds.width or display.scale if available) to
compute CGFloat(pixelWidth) / CGFloat(pointWidth); if you truly intend a
hardcoded 2.0, simplify by assigning 2.0 directly to scaleFactor.
In `@Screenize/Core/Recording/EventStreamWriter.swift`:
- Around line 154-174: UI state outputs are using raw macOS coordinates while
writeMouseMoves applies a Y-flip and scale; update the code that builds
PolyUIStateEvent in writeUIStates (the samples.map block constructing
cursorX/cursorY, elementFrameX/Y/W/H, caretX/Y/W/H) to convert positions via the
same CoordinateConverter.screenToCapturePixel() logic (i.e. apply the Y-flip
(height - y) and scaleFactor used by writeMouseMoves) so cursorPosition, element
frame origins/sizes and caret bounds are transformed consistently for correct
round-trip with EventStreamAdapter.
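As an illustration of the conversion the bullet above describes, here is a hedged sketch. `CapturePixelConverter` and its fields are hypothetical stand-ins for the project's `CoordinateConverter.screenToCapturePixel()`, whose actual signature may differ:

```swift
import Foundation

// Sketch of the Y-flip + scale conversion shared by writeMouseMoves.
// macOS screen coordinates put the origin at the bottom-left; capture
// pixels use top-left, so flip Y within the display before scaling.
struct CapturePixelConverter {
    let displayHeightPoints: Double
    let scaleFactor: Double

    func screenToCapturePixel(x: Double, y: Double) -> (x: Double, y: Double) {
        let flippedY = displayHeightPoints - y
        return (x * scaleFactor, flippedY * scaleFactor)
    }
}

let converter = CapturePixelConverter(displayHeightPoints: 900, scaleFactor: 2.0)
let p = converter.screenToCapturePixel(x: 100, y: 100)
// p == (200.0, 1600.0): y is flipped within the 900-point display, then doubled.
```

Applying the same transform to cursor position, element frames, and caret bounds keeps the round trip with EventStreamAdapter consistent.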
In `@Screenize/Project/MediaAsset.swift`:
- Around line 117-121: The decoder currently constructs misleading absolute URLs
via URL(fileURLWithPath: videoRelativePath) and URL(fileURLWithPath:
mouseDataRelativePath); instead set videoURL and mouseDataURL to a clear
sentinel (e.g., URL(string: "about:blank")!) and keep the decoded strings
(videoRelativePath, mouseDataRelativePath) unchanged so resolveURLs(from:) can
compute real URLs later; update MediaAsset's decoder (the init(from:) that sets
videoURL/mouseDataURL) to assign the sentinel placeholder and ensure any
filesExist checks use resolveURLs(from:) first or treat the sentinel as “not
resolved.”
In `@Screenize/Project/PackageManager.swift`:
- Around line 92-95: The video move currently silently skips when the source is
missing or the destination already exists, then returns a PackageInfo that
points at destVideoURL; update createPackageFromVideo and createPackageV4 to
explicitly handle both cases: if fileManager.fileExists(atPath: videoURL.path)
is false, throw an error (unexpected missing source) instead of continuing; if
fileManager.fileExists(atPath: destVideoURL.path) is true, either overwrite the
existing file by removing destVideoURL before moving or throw an error
indicating a destination collision (choose consistent behavior for both
functions); ensure the thrown error types are surfaced to callers so load()’s
videoExists check no longer surprises consumers.
In `@Screenize/Render/FrameEvaluator.swift`:
- Around line 190-203: keystrokeOpacity can produce negative values when time >
segment.endTime because remaining becomes negative; in keystrokeOpacity(for:at:)
clamp remaining to a non-negative value (e.g. use max(0, remaining)) before
dividing by segment.fadeOutDuration and also clamp the final returned opacity to
the 0...1 range to guard against floating-point edge cases; update the logic
within keystrokeOpacity and reference KeystrokeSegment.fadeOutDuration,
.fadeInDuration, .startTime and .endTime when applying the max(0, ...) and final
min/max clamp.
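A sketch of the clamped fade logic that bullet asks for. `FadeSegment` is a simplified stand-in for `KeystrokeSegment`; the field names follow the prompt:

```swift
import Foundation

struct FadeSegment {
    var startTime: Double
    var endTime: Double
    var fadeInDuration: Double
    var fadeOutDuration: Double
}

func keystrokeOpacity(for segment: FadeSegment, at time: Double) -> Double {
    let elapsed = time - segment.startTime
    let remaining = max(0, segment.endTime - time)   // never negative past endTime
    var opacity = 1.0
    if elapsed < segment.fadeInDuration {
        opacity = elapsed / segment.fadeInDuration
    }
    if remaining < segment.fadeOutDuration {
        opacity = min(opacity, remaining / segment.fadeOutDuration)
    }
    return min(1, max(0, opacity))                   // final clamp for FP edge cases
}

let seg = FadeSegment(startTime: 0, endTime: 2, fadeInDuration: 0.2, fadeOutDuration: 0.3)
// Past endTime, the clamped `remaining` keeps the result at 0 instead of going negative.
```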
In `@Screenize/ScreenizeApp.swift`:
- Around line 304-308: When buildCaptureMeta(videoURL:) on appState returns nil
in the createProjectFromRecording flow, set the view-model's errorMessage (and
optionally log) before returning so the UI shows failure; specifically update
the branch in createProjectFromRecording that checks guard let captureMeta =
await appState.buildCaptureMeta(videoURL: videoURL) to assign a descriptive
errorMessage (e.g. "Failed to build capture metadata") and then return,
referencing buildCaptureMeta, createProjectFromRecording, isCreatingProject and
errorMessage to locate the code; keep isCreatingProject semantics unchanged but
ensure the errorMessage is set so the spinner user feedback is cleared with an
error.
In `@Screenize/Timeline/Track.swift`:
- Around line 62-94: The initializers for CameraSegment (init(...)),
CursorSegment, and KeystrokeSegment must validate that startTime < endTime to
avoid inverted ranges; update each initializer (CameraSegment.init,
CursorSegment.init, KeystrokeSegment.init) to either precondition(startTime <
endTime) or clamp/normalize the values (e.g., swap or adjust endTime) so the
segment always has a positive duration, and ensure any defaulting preserves this
invariant before assigning to the stored properties.
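A sketch of the clamp/normalize variant (the `precondition` variant would simply trap instead). `Segment` stands in for the three segment types named above:

```swift
import Foundation

struct Segment {
    let startTime: Double
    let endTime: Double

    init(startTime: Double, endTime: Double) {
        self.startTime = startTime
        // Normalize an inverted or zero-length range to a minimal positive duration.
        self.endTime = max(endTime, startTime + 0.001)
    }
}

let inverted = Segment(startTime: 5.0, endTime: 3.0)
// inverted keeps a positive duration instead of an inverted range.
```

The 0.001 minimum duration mirrors the epsilon already used elsewhere in the review's segment-conversion suggestions.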
In `@Screenize/ViewModels/EditorViewModel.swift`:
- Around line 827-861: The current segment endTime logic uses endTime =
max(current.time + 0.001, endTime) which can push a segment past the next
keyframe and cause overlaps; change the logic so only the final segment enforces
a minimum duration. Compute rawEnd = index + 1 < sorted.count ? sorted[index +
1].time : duration, and then set endTime = rawEnd for intermediate segments (so
they never exceed the next keyframe) and only apply max(current.time + 0.001,
rawEnd) for the last segment. Update the loop in convertToCursorTrack (the code
building CursorSegment instances from sorted keyframes/track) to use these
variables so segments do not overlap.
- Around line 808-821: In convertToCameraTrack, avoid creating zero-length or
overlapping CameraSegment instances by first filtering out keyframes with time
>= duration from sorted, then when building segments compute nextTime =
min(duration, sorted[index + 1].time) (or duration for last), set endTime =
min(duration, max(current.time + 0.001, nextTime)) and before appending skip any
segment where startTime >= endTime; this ensures startTime < endTime, clamps to
duration, and prevents overlap between successive segments created in the
segments array.
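The rules in the bullet above (filter keyframes past the end, clamp to duration, enforce a minimum duration, skip degenerate segments) can be sketched as a standalone function; `Keyframe` and `segmentRanges` are illustrative names, not the project's API:

```swift
import Foundation

struct Keyframe { let time: Double }

func segmentRanges(from keyframes: [Keyframe], duration: Double) -> [(start: Double, end: Double)] {
    let sorted = keyframes
        .filter { $0.time < duration }          // drop keyframes at or past the end
        .sorted { $0.time < $1.time }
    var result: [(start: Double, end: Double)] = []
    for (index, current) in sorted.enumerated() {
        let nextTime = index + 1 < sorted.count ? min(duration, sorted[index + 1].time) : duration
        let endTime = min(duration, max(current.time + 0.001, nextTime))
        if current.time < endTime {             // skip zero-length segments
            result.append((start: current.time, end: endTime))
        }
    }
    return result
}

let ranges = segmentRanges(from: [Keyframe(time: 0), Keyframe(time: 2), Keyframe(time: 12)], duration: 10)
// → [(0.0, 2.0), (2.0, 10.0)]; the keyframe at 12 is discarded.
```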
- Around line 554-567: The addKeystrokeSegment function currently sets
hasUnsavedChanges = true and calls invalidatePreviewCache() redundantly because
addSegment(to:at:) (the caller) already performs those actions; remove the
duplicate lines from addKeystrokeSegment so its behavior matches
addTransformSegment and addCursorSegment, leaving state mutation and cache
invalidation only in addSegment(to:at:), and keep references to
KeystrokeSegment, selectedSegmentID, selectedSegmentTrackType, and
project.timeline.tracks[trackIndex] unchanged.
In `@Screenize/Views/Inspector/InspectorView.swift`:
- Around line 390-414: The setter currently ignores failed updates from
track.updateSegment(updated) causing UI/model desync; change the setter in
cameraSegmentBinding (and cursorSegmentBinding) to capture the previous
track/segment, call track.updateSegment(updated) and if it returns false then
restore the previous state and force a SwiftUI refresh (e.g., reassign
self.timeline.tracks[trackIndex] = .camera(previousTrack) or call
self.onSegmentChange?() / objectWillChange.send() on the model inside
DispatchQueue.main.async) so the UI is reverted to the valid value (or surface a
brief validation message) instead of silently dropping the change.
In `@Screenize/Views/Timeline/TimelineView.swift`:
- Around line 204-219: The smooth-cursor toggle currently uses the same SF
Symbol for both states (so the icon doesn't change); update the Button’s Image
in the timeline.tracks cursor branch to choose a distinct symbol based on
cursorTrack.useSmoothCursor (e.g., use "waveform.path" when false and a clearly
different glyph such as "waveform.path.circle.fill" or
"waveform.path.badge.checkmark" when true), and keep the existing
foregroundStyle and .help logic to reflect the state; change the conditional at
the Image(systemName: ...) in the Button label so the icon string differs when
cursorTrack.useSmoothCursor is true versus false.
---
Outside diff comments:
In `@Screenize/Models/MouseEvent.swift`:
- Around line 322-358: The synthesized Decodable for struct MouseRecording will
fail to decode legacy JSON missing the new properties (scrollEvents,
keyboardEvents, dragEvents, uiStateSamples and possibly
frameRate/scaleFactor/createdAt); implement a custom init(from decoder: Decoder)
in MouseRecording that uses container.decodeIfPresent(...) for those keys and
supplies sensible defaults (empty arrays, default frameRate 60, scaleFactor 1.0,
createdAt = Date() or decodeIfPresent with fallback) so existing load() calls
can decode older files without throwing; locate MouseRecording and replace the
synthesized decoding by adding this init(from:) and keep the existing memberwise
init for creation.
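A self-contained sketch of that tolerant decoder, with a reduced field set standing in for the real `MouseRecording`:

```swift
import Foundation

// Sketch: decodeIfPresent with defaults so older JSON lacking the
// newer keys still decodes instead of throwing.
struct Recording: Codable {
    var frameRate: Double
    var scaleFactor: Double
    var scrollEvents: [Double]
    var keyboardEvents: [Double]

    enum CodingKeys: String, CodingKey {
        case frameRate, scaleFactor, scrollEvents, keyboardEvents
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Missing keys fall back to defaults rather than failing the whole load.
        frameRate = try container.decodeIfPresent(Double.self, forKey: .frameRate) ?? 60
        scaleFactor = try container.decodeIfPresent(Double.self, forKey: .scaleFactor) ?? 1.0
        scrollEvents = try container.decodeIfPresent([Double].self, forKey: .scrollEvents) ?? []
        keyboardEvents = try container.decodeIfPresent([Double].self, forKey: .keyboardEvents) ?? []
    }
}

let legacyJSON = Data(#"{"frameRate": 30}"#.utf8)
let decoded = try! JSONDecoder().decode(Recording.self, from: legacyJSON)
// decoded.frameRate == 30; the absent arrays default to [] and scaleFactor to 1.0.
```

Defining only `init(from:)` leaves encoding synthesized, so newly written files still contain every key.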
In `@Screenize/ScreenizeApp.swift`:
- Around line 260-282: openVideo currently attempts to create the .screenize
package in the source video's parent directory by calling
PackageManager.shared.createPackageFromVideo which will fail on read-only
volumes; update openVideo to first check whether parentDirectory is writable and
if not prompt the user with a save location (e.g., show a
NSSavePanel/NSSavePanel-like dialog) or default to a known writable directory
such as FileManager.default.urls(for: .documentDirectory,
.userDomainMask).first, then pass that writable URL into createPackageFromVideo;
also replace the simple print in the catch with a user-facing alert/notification
(so failures from ProjectCreator.createFromVideo and projectManager.save surface
to the user) and still log the error for debugging.
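A sketch of the writability check and fallback; the helper name `packageDestination` is hypothetical, while `isWritableFile(atPath:)` and `urls(for:in:)` are standard Foundation API:

```swift
import Foundation

func packageDestination(for videoURL: URL) -> URL {
    let parent = videoURL.deletingLastPathComponent()
    if FileManager.default.isWritableFile(atPath: parent.path) {
        return parent
    }
    // Read-only volume (e.g. a mounted DMG): fall back to a writable location.
    return FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
        ?? FileManager.default.temporaryDirectory
}

let dest = packageDestination(for: URL(fileURLWithPath: "/tmp/demo.mov"))
```

In the real app a save panel would likely be preferable to a silent fallback, so the user knows where the package landed.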
In `@Screenize/Views/Inspector/TransformInspector.swift`:
- Around line 156-178: The Slider bindings for keyframe.center are missing the
onChange callback; update the set closures for both X and Y Slider bindings in
TransformInspector.swift so after assigning keyframe.center =
NormalizedPoint(...) they also call onChange?(), mirroring the TextField setters
(i.e., modify the set blocks that currently set keyframe.center to also invoke
onChange?() so slider drags notify the parent).
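Since the `Slider` and `TextField` setters share the same shape, this fix pairs naturally with the earlier suggestion to centralize the center bindings. A platform-neutral sketch of that idea (in the real SwiftUI view these would be computed `Binding<CGFloat>` properties; the names here are illustrative):

```swift
import Foundation

struct NormalizedPoint { var x: Double; var y: Double }

// One place owns the set-then-notify logic; every control reuses it,
// so no setter can forget to call onChange.
final class KeyframeModel {
    var center = NormalizedPoint(x: 0.5, y: 0.5)
    var onChange: (() -> Void)?

    func setCenterX(_ newX: Double) {
        center = NormalizedPoint(x: newX, y: center.y)
        onChange?()
    }

    func setCenterY(_ newY: Double) {
        center = NormalizedPoint(x: center.x, y: newY)
        onChange?()
    }
}

let model = KeyframeModel()
var changeCount = 0
model.onChange = { changeCount += 1 }
model.setCenterX(0.25)
model.setCenterY(0.75)
// center == (0.25, 0.75) and onChange fired for both the X and Y updates.
```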
---
Duplicate comments:
In `@AGENTS.md`:
- Around line 111-123: Update terminology from keyframe/timeline to
segment-based names: replace occurrences of "Keyframe" and "Timeline" with
"Segment" and "SegmentTrack"/"SegmentKeyframe" across code, tests, and docs
(including exported types/interfaces and comments), and rename functions like
addKeyframe/getKeyframes/sortKeyframes to addSegment/getSegments/sortSegments
(or equivalent behavior-first names) while preserving existing behavior and
ordering invariants; ensure references to NormalizedPoint and
CoordinateConverter remain unchanged and update any tests, docs, and API surface
to use the new segment-based names so callers compile and documentation reflects
the segment model.
- Line 23: Replace the hardcoded absolute path
"/Users/sung-yein/Workspace/screenize" in AGENTS.md with a generic
repository-root reference (e.g., "repository root" or use a relative path like
"./" or a placeholder such as "<repo-root>") so the instructions are portable;
update the line "Run all commands from repository root
(`/Users/sung-yein/Workspace/screenize`)." to read something like "Run all
commands from the repository root (e.g., ./ or <repo-root>)."
In `@Screenize/Project/PackageManager.swift`:
- Around line 192-194: The hard check `guard project.version == 5` in
PackageManager.swift rejects older project files; change this to allow a range
(e.g., minimum supported ..= current) and handle older versions by performing an
in-place migration instead of immediately throwing
PackageManagerError.unsupportedProjectVersion(project.version). Implement a
migration path (e.g., a migrateProject(fromVersion:toVersion:) method or
similar) and call it when project.version < currentVersion, and only throw
PackageManagerError.unsupportedProjectVersion if the version is below the
minimum supported; keep the unique symbols `project.version` and
`PackageManagerError.unsupportedProjectVersion` as the touchpoints for this
change.
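The suggested gate can be sketched as follows. This is a minimal stand-in, not the real `PackageManager`: `minimumSupportedVersion` and the commented `migrateProject(fromVersion:toVersion:)` hook are hypothetical; only `project.version` and `PackageManagerError.unsupportedProjectVersion` come from the finding.

```swift
// Sketch of a version gate that accepts a supported range and migrates
// older projects step by step instead of rejecting everything != current.
enum PackageManagerError: Error {
    case unsupportedProjectVersion(Int)
}

let minimumSupportedVersion = 3  // assumption: the real minimum may differ
let currentVersion = 5

func validateAndMigrate(version: Int) throws -> Int {
    guard (minimumSupportedVersion...currentVersion).contains(version) else {
        throw PackageManagerError.unsupportedProjectVersion(version)
    }
    var migrated = version
    while migrated < currentVersion {
        // migrateProject(fromVersion: migrated, toVersion: migrated + 1) — hypothetical hook
        migrated += 1
    }
    return migrated
}
```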
In `@Screenize/Render/FrameEvaluator.swift`:
- Around line 99-101: rawProgress can escape [0,1] due to FP edge cases; clamp
it before calling segment.interpolation.apply by computing a clamped value
(e.g., clamp rawProgress to 0...1 using min/max) and pass that to
segment.interpolation.apply instead of rawProgress (keep the existing duration
handling and max(0.001, ...) logic intact).
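The clamp itself is one line; a minimal sketch, with the surrounding duration handling assumed to stay as-is:

```swift
// Clamp a floating-point progress value into [0, 1] before handing it to
// the easing/interpolation function, guarding against FP edge cases.
func clampedProgress(_ rawProgress: Double) -> Double {
    min(1.0, max(0.0, rawProgress))
}
```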
In `@Screenize/Render/MouseDataConverter.swift`:
- Around line 26-54: When handling a .leftDown/.rightDown in the loop over
sortedClicks, existing pendingDownByType entries are currently overwritten
(losing the first down); before assigning pendingDownByType[click.clickType] =
click check for an existing down (pendingDownByType[click.clickType]) and if
present emit a RenderClickEvent for that stale down (use its time/position, a
minimal duration like 0.03, and map ClickEventData.ClickType to ClickType) then
replace it with the new down. Update the logic in the switch branch that handles
.leftDown/.rightDown (and use the same RenderClickEvent shape used elsewhere) so
no down is silently dropped.
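The flush-before-replace pattern can be sketched with simplified stand-in types (`Down` and the `RenderClickEvent` shape here are illustrative, not the real `ClickEventData`/`RenderClickEvent`):

```swift
// Sketch: emit a stale pending "down" before overwriting it, so the first
// down is surfaced with a minimal duration instead of silently dropped.
struct Down { let type: String; let time: Double }
struct RenderClickEvent { let time: Double; let duration: Double }

var pendingDownByType: [String: Down] = [:]
var emitted: [RenderClickEvent] = []

func handleDown(_ down: Down) {
    if let stale = pendingDownByType[down.type] {
        // Flush the orphaned down with a minimal 0.03 s duration.
        emitted.append(RenderClickEvent(time: stale.time, duration: 0.03))
    }
    pendingDownByType[down.type] = down
}
```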
In `@Screenize/ViewModels/EditorViewModel.swift`:
- Line 643: The undo snapshot is being saved before validation, causing rejected
updates to pollute the undo stack; change the flow so saveUndoSnapshot() is only
called after track.updateSegment(segment) succeeds (or after successful
validation/commit) — locate the calls to saveUndoSnapshot() paired with
track.updateSegment(segment) in EditorViewModel (the occurrences at the current
spot and the similar callers around the other mentions) and move or guard those
saveUndoSnapshot() invocations so they run only on a successful
update/validation result (or inside the success branch), ensuring no snapshot is
pushed when updateSegment fails.
- Line 514: track.addSegment(newSegment) currently returns a Bool which is being
ignored and selectedSegmentID is set unconditionally; change the code to capture
the Bool result from track.addSegment(newSegment) and only set selectedSegmentID
= newSegment.id when the call returns true (i.e., segment was actually added).
Update both occurrences that call track.addSegment(newSegment) (the one that
currently discards the result and the duplicate at the other spot) so you check
the returned Bool and handle the failure case appropriately (e.g., do not change
selectedSegmentID and optionally handle/log the failure).
- Around line 497-519: In addTransformSegment(at:) ensure you early-return if
time >= duration and stop using the max(...) that can push the segment past the
video end: add guard time < duration else { return }, compute let endTime =
min(duration, time + 1.0) and use that endTime directly when creating
CameraSegment (remove the max(time + 0.05, endTime) expression) so
newSegment.endTime is always clamped to duration.
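The time computation reduces to a small pure function; a sketch of the guarded logic (the function name is illustrative):

```swift
// Early-return past the video end; clamp the segment end to duration so a
// new one-second segment can never extend beyond the video.
func transformSegmentTimes(at time: Double, duration: Double) -> (start: Double, end: Double)? {
    guard time < duration else { return nil }
    let endTime = min(duration, time + 1.0)
    return (start: time, end: endTime)
}
```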
---
Nitpick comments:
In `@Screenize/App/AppState.swift`:
- Around line 380-441: createProject currently recomputes capture metadata by
calling buildCaptureMeta again; change its signature to accept an optional
CaptureMeta parameter (e.g., createProject(packageInfo: PackageInfo,
captureMeta: CaptureMeta? = nil) async -> ScreenizeProject?) and use the
passed-in captureMeta if non-nil, falling back to calling buildCaptureMeta only
when not provided; update the call site
ScreenizeApp.createProjectFromRecording() to pass the already-computed
captureMeta instead of relying on the internal buildCaptureMeta so video
metadata extraction and captureMeta derivation happen only once.
In `@Screenize/Generators/ClickCursorGenerator.swift`:
- Around line 6-37: The class ClickCursorGenerator is misnamed because it no
longer generates click-based keyframes; rename the class and its references to a
clearer name (e.g., DefaultCursorTrackGenerator) and update its conformance if
needed: change the type name ClickCursorGenerator to
DefaultCursorTrackGenerator, update the human-facing name/description properties
(name and description) and keep the generate(from: MouseDataSource, settings:
GeneratorSettings) -> CursorTrack implementation, ensuring all
usages/imports/tests referencing ClickCursorGenerator are updated to the new
identifier and KeyframeGenerator conformance is preserved or adjusted
accordingly.
In `@Screenize/Generators/SmartZoom/SessionCenterResolver.swift`:
- Line 262: The comment "// Check the time interval" in the for loop inside
SessionCenterResolver.swift is over-indented compared to surrounding lines;
locate the for loop in the SessionCenterResolver (e.g., inside the method where
sessions are iterated) and left-align that comment to the same indentation level
as the adjacent statements in the loop body so it matches the file's indentation
style.
- Line 200: The code is re-wrapping NormalizedPoint instances unnecessarily; in
SessionCenterResolver (variables lastMoveCenter, newCenter, currentCenter) stop
constructing new NormalizedPoint(x: ..., y: ...) and pass the existing
NormalizedPoint variables directly (e.g., use center: lastMoveCenter instead of
center: NormalizedPoint(x: lastMoveCenter.x, y: lastMoveCenter.y)); update the
four occurrences referenced (around the center assignments) to use the variables
themselves to remove redundant deconstruction/re-wrapping.
In `@Screenize/Generators/SmartZoomGenerator.swift`:
- Around line 174-179: The code is redundantly reconstructing NormalizedPoint
instances (e.g., NormalizedPoint(x: sessionCenter.x, y: sessionCenter.y)) when
you already have a NormalizedPoint; update the TransformKeyframe constructions
to pass the existing NormalizedPoint directly (use sessionCenter,
effectiveCenter, nextCenter as the center argument) in the SmartZoomGenerator
code paths that build keyframes (the TransformKeyframe initializers around the
zoomInEndTime and the places noted with effectiveCenter/nextCenter) so you stop
creating unnecessary copies.
In `@Screenize/Models/EventStreamAdapter.swift`:
- Around line 160-178: The metadata-loading block duplicated in
loadUIStateSamples should be extracted into a shared helper (e.g., a static
method loadMetadata(from: URL, interop: InteropBlock) -> PolyRecordingMetadata?)
and used by both loadUIStateSamples and load(), so remove the duplicated
decoding logic inside loadUIStateSamples and call the new helper instead; ensure
the helper returns an optional PolyRecordingMetadata and preserves the current
failure behavior (returning nil or [] where appropriate) and reference the
existing symbols loadUIStateSamples and load() when replacing the duplicated
code.
In `@Screenize/Models/MouseEvent.swift`:
- Line 360: Remove the stale comment "// MARK: - Codable" that remains after you
removed the custom Codable implementation in MouseEvent.swift; either delete the
marker entirely or move a brief "Codable"/comment annotation up to the
MouseEvent struct declaration to indicate synthesized conformance (refer to the
MouseEvent type) so there is no orphaned section header with no code beneath it.
In `@Screenize/Project/PackageManager.swift`:
- Around line 69-105: create a private helper (e.g. preparePackageScaffold or
buildPackageSkeleton) that encapsulates the duplicated logic in
createPackageFromVideo and createPackageV4: construct packageURL, create
recordingDir, compute canonicalVideoFilename, videoRelativePath and
mouseDataRelativePath, compute destVideoURL and destMouseDataURL, and move the
source video when appropriate; have the helper return the values needed by
callers (either a PackageInfo or a small struct with packageURL, projectJSONURL,
videoURL, mouseDataURL, videoRelativePath, mouseDataRelativePath) so
createPackageFromVideo and createPackageV4 call this helper and then only
perform their method-specific steps (createPackageFromVideo simply returns the
PackageInfo, createPackageV4 writes event-stream and interop block). Ensure the
helper uses the same symbols seen here (createPackageFromVideo, createPackageV4,
Self.packageExtension, Self.recordingDirectory, Self.canonicalVideoBaseName,
Self.canonicalMouseDataFilename, Self.projectFilename, fileManager.moveItem).
In `@Screenize/Render/ExportEngine.swift`:
- Around line 82-97: ExportEngine currently decodes the event stream twice by
calling MouseDataConverter.loadAndConvert and
MouseDataConverter.loadAndConvertWithInterpolation; instead, mirror the
PreviewEngine refactor: load the raw event/source once (e.g. via
MouseDataConverter.loadAndConvert or a new helper that returns the raw decoded
events), then produce both raw positions and smoothed positions from that single
decoded source (apply the interpolation step on the already-loaded data rather
than re-calling loadAndConvertWithInterpolation). Update ExportEngine's logic to
call the single load helper and then branch to produce rawMousePositions,
clickEvents, and smoothedMousePositions, reusing the same decoded input.
In `@Screenize/Render/FrameEvaluator.swift`:
- Around line 165-188: Remove the unused motion-blur fields from CursorState and
update all construction sites accordingly: delete the velocity and
movementDirection properties from the CursorState type (and any
Codable/Equatable conformance or init that references them), then update
FrameEvaluator.evaluateCursor to stop passing velocity and movementDirection
when creating a CursorState (it should only set position, style, scale, visible,
isClicking, clickType, and clickScaleModifier). Also search for any other
references to CursorState.velocity or CursorState.movementDirection and remove
or refactor those usages (e.g., in initializers, tests, or other code paths) so
the code compiles without the two fields.
In `@Screenize/Render/MousePositionInterpolator.swift`:
- Around line 86-118: The gaussianSmooth function redundantly accumulates
weightSum and divides by it even though gaussianKernel already returns a
normalized kernel; remove the weightSum variable and its accumulation in
gaussianSmooth, and stop dividing sumX/sumY by weightSum when constructing the
new RenderMousePosition — keep the per-sample weight multiplication (weight =
kernel[k + radius]) and assign x: sumX and y: sumY directly; reference symbols:
gaussianSmooth, gaussianKernel, weightSum, RenderMousePosition.
- Around line 227-231: Replace the inner `if` inside the `for pos in
positions.sorted(by: { $0.timestamp < $1.timestamp }) { ... }` loop with a
`where` clause on the `for` iteration so the body only runs when `pos.timestamp
- lastTimestamp >= minInterval`; move the `result.append(pos)` and
`lastTimestamp = pos.timestamp` into the loop body and remove the now-unneeded
`if` to match SwiftLint's preferred idiom (refer to the loop over
`positions.sorted(...)`, `lastTimestamp`, `minInterval`, and `result`).
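The `for … where` form of the filter looks like this (with a simplified `Sample` type standing in for `RenderMousePosition`):

```swift
// SwiftLint-preferred idiom: the `where` clause replaces the inner `if`,
// so the loop body only runs for samples far enough from the last kept one.
struct Sample { let timestamp: Double }

func thin(_ positions: [Sample], minInterval: Double) -> [Sample] {
    var result: [Sample] = []
    var lastTimestamp = -Double.infinity
    for pos in positions.sorted(by: { $0.timestamp < $1.timestamp })
    where pos.timestamp - lastTimestamp >= minInterval {
        result.append(pos)
        lastTimestamp = pos.timestamp
    }
    return result
}
```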
In `@Screenize/Render/PreviewEngine.swift`:
- Around line 153-163: The code calls MouseDataConverter.loadAndConvert and
MouseDataConverter.loadAndConvertWithInterpolation which each re-read and decode
the mouse event stream; instead, call the low-level loader once
(loadMouseDataSourceFromEventStreams) to obtain the raw event source, then call
the conversion step to produce raw positions (used to set rawMousePositions and
renderClickEvents) and run the interpolation step on those positions to produce
smoothedMousePositions; update MouseDataConverter by adding or using overloads
that accept the already-loaded event source or positions (or separate convert()
and interpolate() helpers) and replace the two direct load calls in
PreviewEngine (where rawMousePositions, renderClickEvents and
smoothedMousePositions are assigned) to use the single-load, two-step flow.
In `@Screenize/Timeline/Track.swift`:
- Around line 451-551: CameraTrack and CursorTrackV2 duplicate segment
management (addSegment, updateSegment, removeSegment, activeSegment, hasOverlap)
— factor this into a shared generic implementation: define a protocol (e.g.,
OverlapGuardedSegmentTrack) or a generic helper that requires a mutable
`segments: [S]` property and provides default implementations for addSegment(_
:), updateSegment(_ :), removeSegment(id:), activeSegment(at:), and
hasOverlap(_:excluding:) where S has startTime, endTime, and id; then make
CameraTrack and CursorTrackV2 conform and remove the duplicated methods. Ensure
the generic uses the same overlap logic and sorting used in the initializers and
keeps existing symbols (CameraTrack, CursorTrackV2, addSegment, updateSegment,
removeSegment, activeSegment, hasOverlap) so callers remain unchanged.
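One possible shape for the shared abstraction, sketched with a demo conforming type. The protocol names follow the review's suggestion and the overlap/sort details are assumptions about the existing logic, not a copy of it:

```swift
import Foundation

// Requirements shared by CameraSegment / CursorSegment-like types.
protocol TimedSegment {
    var id: UUID { get }
    var startTime: Double { get }
    var endTime: Double { get }
}

// Tracks expose a mutable segment array; everything else comes for free.
protocol OverlapGuardedSegmentTrack {
    associatedtype Segment: TimedSegment
    var segments: [Segment] { get set }
}

extension OverlapGuardedSegmentTrack {
    func hasOverlap(_ segment: Segment, excluding excluded: UUID? = nil) -> Bool {
        segments.contains { other in
            other.id != segment.id && other.id != excluded
                && segment.startTime < other.endTime
                && other.startTime < segment.endTime
        }
    }

    @discardableResult
    mutating func addSegment(_ segment: Segment) -> Bool {
        guard !hasOverlap(segment) else { return false }
        segments.append(segment)
        segments.sort { $0.startTime < $1.startTime }
        return true
    }

    func activeSegment(at time: Double) -> Segment? {
        segments.first { $0.startTime <= time && time < $0.endTime }
    }
}

// Minimal conforming type for illustration.
struct DemoSegment: TimedSegment {
    let id = UUID()
    let startTime: Double
    let endTime: Double
}

struct DemoTrack: OverlapGuardedSegmentTrack {
    var segments: [DemoSegment] = []
}
```

CameraTrack and CursorTrackV2 would conform the same way DemoTrack does, keeping their existing method names so callers compile unchanged.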
In `@Screenize/Views/Inspector/TransformInspector.swift`:
- Around line 136-143: Several bindings for keyframe.center X/Y are duplicated
across CenterPointPicker, Slider, and TextField; create two computed
Binding<CGFloat> properties (e.g., centerXBinding and centerYBinding) that get
from keyframe.center.x / keyframe.center.y and set by assigning keyframe.center
= NormalizedPoint(x: newX, y: existingY) (and vice versa) and invoke the
existing onChange there; then replace the inline Binding(...) usages in
CenterPointPicker, Slider, and TextField with these computed bindings to
centralize logic and remove repetition (ensure the setters call onChange after
updating keyframe.center).
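The centralized-binding pattern can be sketched outside the view layer. `NormalizedPoint`, the model class, and the `onChangeCalls` counter here are stand-ins for the inspector's `keyframe.center` and `onChange?()`:

```swift
import SwiftUI

// Stand-in for the project's NormalizedPoint.
struct NormalizedPoint { var x: CGFloat; var y: CGFloat }

final class CenterModel {
    var center = NormalizedPoint(x: 0.5, y: 0.5)
    var onChangeCalls = 0
}

// One binding per axis: get reads the stored point, set rebuilds it and
// notifies the parent — shared by picker, slider, and text field alike.
func centerXBinding(_ model: CenterModel) -> Binding<CGFloat> {
    Binding(
        get: { model.center.x },
        set: { newX in
            model.center = NormalizedPoint(x: newX, y: model.center.y)
            model.onChangeCalls += 1  // stands in for onChange?()
        }
    )
}

func centerYBinding(_ model: CenterModel) -> Binding<CGFloat> {
    Binding(
        get: { model.center.y },
        set: { newY in
            model.center = NormalizedPoint(x: model.center.x, y: newY)
            model.onChangeCalls += 1
        }
    )
}
```

In the real view these would be computed `Binding<CGFloat>` properties on the struct, replacing the inline `Binding(...)` closures.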
In `@Screenize/Views/Timeline/TimelineView.swift`:
- Around line 327-426: The function segmentBlock is too long; extract the
repeated left/right handle Rectangle code into a small reusable view (e.g.,
SegmentHandleView) and replace both inline handle blocks with it. Implement
SegmentHandleView to accept parameters: isStart (or mode:
.resizeStart/.resizeEnd), handleWidth, selectedSegmentID and id (or a isSelected
Bool), the onHover callback (calling updateResizeCursor), and a gesture input so
you can attach resizeGesture(...) from the caller; ensure it builds the same
Rectangle fill/overlay/stroke opacities and exposes a .highPriorityGesture slot
so you can pass
resizeGesture(for:id:trackType:start:end:mode:snapTargets:editBounds:). Then
update segmentBlock to create two SegmentHandleView instances (one for start,
one for end), removing the duplicated rectangle code and keeping the rest of
segmentBlock identical.
- Around line 436-477: The DragGesture on the segment body uses minimumDistance:
0 which allows a tap to also trigger the drag's onEnded (calling
commitInteraction/onSegmentSelect and causing a no-op onSegmentTimeRangeChange);
update the DragGesture initializer in the DragGesture(...) call to use a small
positive minimumDistance (e.g., 2) so taps are handled only by the existing
onTapGesture and only deliberate drags invoke onChanged/onEnded (leave
onChanged, onEnded, commitInteraction(for:), onSegmentSelect and
selectedSegmentID/selectedSegmentTrackType logic unchanged).
- Around line 276-325: Refactor the duplicated mapping in segmentBlocks by
extracting a helper (e.g., renderSegmentBlocks(for:trackType:)) that accepts a
collection of segments (or a sequence of tuples with id, startTime, endTime) and
a TrackType, builds the SegmentRange array, and calls ForEach to produce
segmentBlock(...) with snapTargets(...) and editBounds(...); then replace each
AnySegmentTrack case in segmentBlocks with a single call to this helper
(referencing segmentBlocks, segmentBlock, SegmentRange, snapTargets, editBounds,
AnySegmentTrack and TrackType to locate code).
- Prefer strong, domain-specific types over untyped dictionaries.
- Use `@MainActor` for major app/state classes and UI-facing state.
- Use `Sendable` and safe queue/concurrency boundaries where relevant.
- Use async/await patterns consistently with existing code.
- Keep singleton usage aligned with existing architecture (`AppState.shared`, `ProjectManager.shared`).
Vary sentence starts for readability.
Three consecutive bullets start with “Use”, which reads monotonously. Consider rephrasing one or two.
🧰 Tools
🪛 LanguageTool
[style] ~105-~105: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...oncurrency boundaries where relevant. - Use async/await patterns consistently with ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` around lines 102 - 106, The three consecutive bullets that begin
with "Use" (the lines referencing `@MainActor`, Sendable, and async/await) read
monotonous; reword one or two to vary sentence starts while keeping their
meaning and references to `@MainActor`, Sendable, async/await, and singletons like
AppState.shared/ProjectManager.shared intact — e.g., start one with "Annotate
major app/state classes with `@MainActor`", another with "Prefer types that
conform to Sendable and enforce safe concurrency boundaries", or "Adopt
async/await consistently with existing code" so the list flows better without
changing technical content.
let events: [PolyUIStateEvent] = samples.map { sample in
    let offsetMs = Int64(sample.timestamp * 1000)
    return PolyUIStateEvent(
        processTimeMs: processTimeStartMs + offsetMs,
        unixTimeMs: unixStartMs + offsetMs,
        cursorX: Double(sample.cursorPosition.x),
        cursorY: Double(sample.cursorPosition.y),
        elementRole: sample.elementInfo?.role,
        elementSubrole: sample.elementInfo?.subrole,
        elementTitle: sample.elementInfo?.title,
        elementAppName: sample.elementInfo?.applicationName,
        elementFrameX: sample.elementInfo.map { Double($0.frame.origin.x) },
        elementFrameY: sample.elementInfo.map { Double($0.frame.origin.y) },
        elementFrameW: sample.elementInfo.map { Double($0.frame.width) },
        elementFrameH: sample.elementInfo.map { Double($0.frame.height) },
        elementIsClickable: sample.elementInfo?.isClickable,
        caretX: sample.caretBounds.map { Double($0.origin.x) },
        caretY: sample.caretBounds.map { Double($0.origin.y) },
        caretW: sample.caretBounds.map { Double($0.width) },
        caretH: sample.caretBounds.map { Double($0.height) }
    )
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check how UIStateSample.cursorPosition is populated to determine its coordinate system
rg -n "UIStateSample(" -C5 --type swift -g '!*Test*' | head -100

Repository: syi0808/screenize
Length of output: 139
🏁 Script executed:
#!/bin/bash
# Look for where cursorPosition is set in UIStateSample
rg -n "cursorPosition" --type swift -g '!*Test*' -B2 -A2 | head -150
Length of output: 3937
🏁 Script executed:
#!/bin/bash
# Check the UIStateSample struct definition
rg -n "struct UIStateSample" -A30 --type swift
Length of output: 2416
🏁 Script executed:
#!/bin/bash
# Look for coordinate transformation logic, especially Y-flip and scale factor usage
rg -n "scaleFactor|height -" --type swift -B2 -A2 | head -100
Length of output: 7154
🏁 Script executed:
#!/bin/bash
# Find relativePosition calculation in MouseDataRecorder
rg -n "relativePosition" --type swift -B5 -A5 Screenize/Core/Recording/MouseDataRecorder.swift
Length of output: 2203
🏁 Script executed:
#!/bin/bash
# Check the writeMouseMoves implementation to see coordinate transformation
rg -n "writeMouseMoves" -A20 --type swift Screenize/Core/Recording/EventStreamWriter.swift | head -50
Length of output: 1961
🏁 Script executed:
#!/bin/bash
# Find convertToScreenBounds implementation
rg -n "convertToScreenBounds" --type swift -B2 -A10 | head -80
Length of output: 7687
🏁 Script executed:
#!/bin/bash
# Also look at the screenBounds property used in convertToScreenBounds
rg -n "screenBounds" --type swift -B1 -A1 Screenize/Core/Recording/MouseDataRecorder.swift | head -60
Length of output: 2438
🏁 Script executed:
#!/bin/bash
# Find CoordinateConverter implementation to understand screenToCapturePixel
rg -n "class CoordinateConverter\|struct CoordinateConverter" -A30 --type swift
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Look for screenToCapturePixel implementation
rg -n "screenToCapturePixel" --type swift -B2 -A10
Length of output: 5325
Cursor position and UI element bounds are written without Y-flip, unlike writeMouseMoves.
writeMouseMoves (line 92) applies (height - pos.y) * scaleFactor to convert from macOS bottom-left-origin coordinates to top-left-origin pixel coordinates. However, writeUIStates passes cursorPosition, elementFrame, and caretBounds through verbatim.
CoordinateConverter.screenToCapturePixel() preserves the macOS bottom-left-origin coordinate system (no Y-flip). For consistency with mouse events and correct round-trip serialization via EventStreamAdapter, apply the same coordinate transformation to all UI state fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Core/Recording/EventStreamWriter.swift` around lines 154 - 174, UI
state outputs are using raw macOS coordinates while writeMouseMoves applies a
Y-flip and scale; update the code that builds PolyUIStateEvent in writeUIStates
(the samples.map block constructing cursorX/cursorY, elementFrameX/Y/W/H,
caretX/Y/W/H) to convert positions via the same
CoordinateConverter.screenToCapturePixel() logic (i.e. apply the Y-flip (height
- y) and scaleFactor used by writeMouseMoves) so cursorPosition, element frame
origins/sizes and caret bounds are transformed consistently for correct
round-trip with EventStreamAdapter.
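The transform writeMouseMoves applies is easy to state as a pure function; this sketch mirrors the parameter names in the finding, while the real conversion lives in CoordinateConverter:

```swift
// Flip from macOS bottom-left-origin screen points to top-left-origin
// capture pixels: x scales directly, y is flipped against screen height.
func toCapturePixel(x: Double, y: Double,
                    screenHeight: Double, scaleFactor: Double) -> (x: Double, y: Double) {
    (x: x * scaleFactor, y: (screenHeight - y) * scaleFactor)
}
```

Applying the same function to cursor position, element frames, and caret bounds keeps all UI state fields in the coordinate system mouse events already use.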
videoRelativePath = try container.decode(String.self, forKey: .videoRelativePath)
mouseDataRelativePath = try container.decode(String.self, forKey: .mouseDataRelativePath)
// Placeholder URLs - must be resolved by caller via resolveURLs(from:)
videoURL = URL(fileURLWithPath: videoRelativePath)
mouseDataURL = URL(fileURLWithPath: mouseDataRelativePath)
Placeholder URLs from relative paths are misleading before resolution.
URL(fileURLWithPath: videoRelativePath) resolves the relative path against the process's current working directory, producing a valid-looking but incorrect absolute URL. Any code that reads videoURL or calls filesExist before resolveURLs(from:) will silently get wrong results.
A safer placeholder (e.g., an empty URL(string: "about:blank")! or a dedicated sentinel) would make misuse more obvious, though admittedly the load path in PackageManager.load(from:) does call resolveURLs promptly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Project/MediaAsset.swift` around lines 117 - 121, The decoder
currently constructs misleading absolute URLs via URL(fileURLWithPath:
videoRelativePath) and URL(fileURLWithPath: mouseDataRelativePath); instead set
videoURL and mouseDataURL to a clear sentinel (e.g., URL(string:
"about:blank")!) and keep the decoded strings (videoRelativePath,
mouseDataRelativePath) unchanged so resolveURLs(from:) can compute real URLs
later; update MediaAsset's decoder (the init(from:) that sets
videoURL/mouseDataURL) to assign the sentinel placeholder and ensure any
filesExist checks use resolveURLs(from:) first or treat the sentinel as “not
resolved.”
if fileManager.fileExists(atPath: videoURL.path),
   !fileManager.fileExists(atPath: destVideoURL.path) {
    try fileManager.moveItem(at: videoURL, to: destVideoURL)
}
Video move silently skipped when source is missing or destination exists.
Both createPackageFromVideo and createPackageV4 silently skip the video move when the source file doesn't exist or the destination already exists, yet still return a PackageInfo referencing destVideoURL. If the source was already consumed (e.g., by a prior failed attempt that created the directory but crashed before writing project.json), the caller receives a PackageInfo pointing to a nonexistent video — the error only surfaces later in load() at the videoExists check.
Consider throwing when the source doesn't exist (unexpected state) and handling the "destination already exists" case explicitly (e.g., overwrite or error).
Also applies to: 134-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Project/PackageManager.swift` around lines 92 - 95, The video move
currently silently skips when the source is missing or the destination already
exists, then returns a PackageInfo that points at destVideoURL; update
createPackageFromVideo and createPackageV4 to explicitly handle both cases: if
fileManager.fileExists(atPath: videoURL.path) is false, throw an error
(unexpected missing source) instead of continuing; if
fileManager.fileExists(atPath: destVideoURL.path) is true, either overwrite the
existing file by removing destVideoURL before moving or throw an error
indicating a destination collision (choose consistent behavior for both
functions); ensure the thrown error types are surfaced to callers so load()’s
videoExists check no longer surprises consumers.
let newSegment = KeystrokeSegment(
    startTime: time,
    endTime: min(duration, time + 1.5),
    displayText: "⌘"
)

track.keyframes.append(newKeyframe)
track.keyframes.sort { $0.time < $1.time }
track.addSegment(newSegment)

project.timeline.tracks[trackIndex] = .keystroke(track)
selectedKeyframeID = newKeyframe.id
selectedTrackType = .keystroke
selectedSegmentID = newSegment.id
selectedSegmentTrackType = .keystroke
hasUnsavedChanges = true
invalidatePreviewCache()
}
addKeystrokeSegment redundantly sets hasUnsavedChanges and calls invalidatePreviewCache.
Lines 565-566 set hasUnsavedChanges = true and call invalidatePreviewCache(), but the caller addSegment(to:at:) (lines 493-494) does the same immediately after. The other add methods (addTransformSegment, addCursorSegment) don't duplicate this — only the keystroke path does.
Remove the duplicate calls
track.addSegment(newSegment)
project.timeline.tracks[trackIndex] = .keystroke(track)
selectedSegmentID = newSegment.id
selectedSegmentTrackType = .keystroke
- hasUnsavedChanges = true
- invalidatePreviewCache()
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

let newSegment = KeystrokeSegment(
    startTime: time,
    endTime: min(duration, time + 1.5),
    displayText: "⌘"
)
track.keyframes.append(newKeyframe)
track.keyframes.sort { $0.time < $1.time }
track.addSegment(newSegment)
project.timeline.tracks[trackIndex] = .keystroke(track)
selectedKeyframeID = newKeyframe.id
selectedTrackType = .keystroke
selectedSegmentID = newSegment.id
selectedSegmentTrackType = .keystroke
hasUnsavedChanges = true
invalidatePreviewCache()
}

Suggested:

let newSegment = KeystrokeSegment(
    startTime: time,
    endTime: min(duration, time + 1.5),
    displayText: "⌘"
)
track.addSegment(newSegment)
project.timeline.tracks[trackIndex] = .keystroke(track)
selectedSegmentID = newSegment.id
selectedSegmentTrackType = .keystroke
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/ViewModels/EditorViewModel.swift` around lines 554 - 567, The
addKeystrokeSegment function currently sets hasUnsavedChanges = true and calls
invalidatePreviewCache() redundantly because addSegment(to:at:) (the caller)
already performs those actions; remove the duplicate lines from
addKeystrokeSegment so its behavior matches addTransformSegment and
addCursorSegment, leaving state mutation and cache invalidation only in
addSegment(to:at:), and keep references to KeystrokeSegment, selectedSegmentID,
selectedSegmentTrackType, and project.timeline.tracks[trackIndex] unchanged.
(The diff here interleaves the legacy converters deleted in this PR — the ClickEventData return, and the keyboardEvents and dragEvents computed properties — with the new code. The added convertToCursorTrack logic reads:)

let sorted = (track.styleKeyframes ?? []).sorted { $0.time < $1.time }

guard !sorted.isEmpty else {
    return CursorTrackV2(
        id: track.id,
        name: track.name,
        isEnabled: track.isEnabled,
        segments: [
            CursorSegment(
                startTime: 0,
                endTime: duration,
                style: track.defaultStyle,
                visible: track.defaultVisible,
                scale: track.defaultScale
            ),
        ]
    )
}

var segments: [CursorSegment] = []
for index in 0..<sorted.count {
    let current = sorted[index]
    let endTime = index + 1 < sorted.count ? sorted[index + 1].time : duration
    segments.append(
        CursorSegment(
            startTime: current.time,
            endTime: max(current.time + 0.001, endTime),
            style: current.style,
            visible: current.visible,
            scale: current.scale
        )
    )
}

return CursorTrackV2(id: track.id, name: track.name, isEnabled: track.isEnabled, segments: segments)
`convertToCursorTrack` segments may overlap when style keyframes are close together.

Line 853: `endTime = max(current.time + 0.001, endTime)` guarantees a minimum duration of 0.001s for each segment. But if two keyframes are less than 0.001s apart, the first segment's guaranteed minimum end will overlap the second segment's start, which could cause `addSegment` overlap checks to reject them later.

Since this is a one-time migration path (legacy → segment), this is a minor edge case unlikely to occur in practice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/ViewModels/EditorViewModel.swift` around lines 827 - 861, The
current segment endTime logic uses endTime = max(current.time + 0.001, endTime)
which can push a segment past the next keyframe and cause overlaps; change the
logic so only the final segment enforces a minimum duration. Compute rawEnd =
index + 1 < sorted.count ? sorted[index + 1].time : duration, and then set
endTime = rawEnd for intermediate segments (so they never exceed the next
keyframe) and only apply max(current.time + 0.001, rawEnd) for the last segment.
Update the loop in convertToCursorTrack (the code building CursorSegment
instances from sorted keyframes/track) to use these variables so segments do not
overlap.
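The loop shape the prompt describes can be sketched with simplified stand-in types (the real `CursorSegment` also carries style/visibility/scale fields, omitted here for brevity): intermediate segments end exactly at the next keyframe, and only the last segment enforces the 0.001s minimum duration.

```swift
import Foundation

// Simplified stand-ins for the app's keyframe/segment types.
struct StyleKeyframe { let time: TimeInterval }
struct Segment { let startTime: TimeInterval; let endTime: TimeInterval }

// Build non-overlapping segments from time-sorted keyframes: intermediate
// segments end at the next keyframe's time; only the final segment applies
// the minimum-duration guarantee.
func makeSegments(from sorted: [StyleKeyframe], duration: TimeInterval) -> [Segment] {
    var segments: [Segment] = []
    for index in 0..<sorted.count {
        let current = sorted[index]
        let isLast = index + 1 == sorted.count
        let rawEnd = isLast ? duration : sorted[index + 1].time
        let endTime = isLast ? max(current.time + 0.001, rawEnd) : rawEnd
        segments.append(Segment(startTime: current.time, endTime: endTime))
    }
    return segments
}
```

With two keyframes 0.0005s apart, the first segment now ends exactly where the second starts, so a downstream overlap check cannot reject the pair.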
```diff
 func cameraSegmentBinding(for id: UUID) -> Binding<CameraSegment>? {
     guard let trackIndex = timeline.tracks.firstIndex(where: { $0.trackType == .transform }),
-          case .transform(let track) = timeline.tracks[trackIndex],
-          track.keyframes.contains(where: { $0.id == id }) else {
+          case .camera(let track) = timeline.tracks[trackIndex],
+          track.segments.contains(where: { $0.id == id }) else {
         return nil
     }

     return Binding(
         get: {
             // Re-find the index by ID each time
-            if case .transform(let track) = self.timeline.tracks[trackIndex],
-               let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                return track.keyframes[keyframeIndex]
+            if case .camera(let track) = self.timeline.tracks[trackIndex],
+               let segmentIndex = track.segments.firstIndex(where: { $0.id == id }) {
+                return track.segments[segmentIndex]
             }
-            return TransformKeyframe(time: 0, zoom: 1.0, centerX: 0.5, centerY: 0.5)
-        },
-        set: { newValue in
-            // Re-find the index by ID each time
-            if case .transform(var track) = self.timeline.tracks[trackIndex],
-               let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                track.keyframes[keyframeIndex] = newValue
-                self.timeline.tracks[trackIndex] = .transform(track)
-            }
-        }
-    )
-}
-
-private func rippleKeyframeBinding(for id: UUID) -> Binding<RippleKeyframe>? {
-    // Initial validation (ensure the keyframe exists)
-    guard let trackIndex = timeline.tracks.firstIndex(where: { $0.trackType == .ripple }),
-          case .ripple(let track) = timeline.tracks[trackIndex],
-          track.keyframes.contains(where: { $0.id == id }) else {
-        return nil
-    }
-
-    return Binding(
-        get: {
-            // Re-find the index by ID each time
-            if case .ripple(let track) = self.timeline.tracks[trackIndex],
-               let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                return track.keyframes[keyframeIndex]
-            }
-            return RippleKeyframe(time: 0, x: 0.5, y: 0.5)
+            return CameraSegment(startTime: 0, endTime: 1, startTransform: .identity, endTransform: .identity)
         },
-        set: { newValue in
-            // Re-find the index by ID each time
-            if case .ripple(var track) = self.timeline.tracks[trackIndex],
-               let keyframeIndex = track.keyframes.firstIndex(where: { $0.id == id }) {
-                track.keyframes[keyframeIndex] = newValue
-                self.timeline.tracks[trackIndex] = .ripple(track)
+        set: { updated in
+            if case .camera(var track) = self.timeline.tracks[trackIndex],
+               track.updateSegment(updated) {
+                self.timeline.tracks[trackIndex] = .camera(track)
+                self.onSegmentChange?()
             }
         }
     )
 }
```
Segment binding setter silently drops rejected updates — UI can desync from model.

In `cameraSegmentBinding` (and similarly `cursorSegmentBinding`), the setter calls `track.updateSegment(updated)`, which returns `Bool`. When it returns false (e.g., overlap), the binding silently does nothing, but the SwiftUI TextField/Slider that triggered the change may still display the rejected value. This creates a UI-model desync until the next view refresh.

This is most apparent with `timeRangeFields` (line 364): a user types a start/end time that overlaps another segment, the update is silently rejected, but the text field shows the rejected value.

Consider either forcing a view update on rejection or surfacing a brief validation message.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Screenize/Views/Inspector/InspectorView.swift` around lines 390 - 414, The
setter currently ignores failed updates from track.updateSegment(updated)
causing UI/model desync; change the setter in cameraSegmentBinding (and
cursorSegmentBinding) to capture the previous track/segment, call
track.updateSegment(updated) and if it returns false then restore the previous
state and force a SwiftUI refresh (e.g., reassign
self.timeline.tracks[trackIndex] = .camera(previousTrack) or call
self.onSegmentChange?() / objectWillChange.send() on the model inside
DispatchQueue.main.async) so the UI is reverted to the valid value (or surface a
brief validation message) instead of silently dropping the change.
… interpolation components.
Actionable comments posted: 4
🧹 Nitpick comments (2)
Screenize/Core/Recording/RecordingCoordinator.swift (1)

64-67: Remove the unused `backgroundStyle` parameter from the `startRecording` method signature.

The parameter is accepted on line 66 but never referenced in the method body (lines 67–113). It's not passed to `RecordingSession`, `CaptureConfiguration`, `ScreenCaptureManager`, or any downstream function. This misleads callers into thinking it influences recording behavior and should either be wired into the capture pipeline or removed entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Core/Recording/RecordingCoordinator.swift` around lines 64 - 67, The startRecording(target:backgroundStyle:) method declares an unused backgroundStyle parameter; remove backgroundStyle from the method signature and all corresponding call sites to avoid misleading API surface, or alternatively thread it into the recording pipeline by passing it into RecordingSession, CaptureConfiguration, and/or ScreenCaptureManager where capture setup occurs (look for RecordingSession initializer, CaptureConfiguration creation, and ScreenCaptureManager.start/... methods) so the value is actually consumed; preferred fix is to delete the unused backgroundStyle parameter from startRecording and update every caller to call startRecording(target:) and any related protocol/override signatures.

Screenize/Views/Inspector/InspectorView.swift (1)
398-404: Fallback segments in binding getters have mismatched IDs — could mask bugs.

When the segment is not found at runtime, the getter returns a brand-new default segment (e.g., `CameraSegment(startTime: 0, endTime: 1, ...)`) with a freshly generated `id`. This phantom segment's ID won't match anything in the model, so any edits to it will silently fail in the setter. While this is defensive, it hides the fact that the view is in an inconsistent state. A more explicit approach (e.g., logging a warning or using `assertionFailure` in debug builds) would surface the issue earlier.

Also applies to: 424-430, 450-456
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Screenize/Views/Inspector/InspectorView.swift` around lines 398 - 404, The getter currently returns a brand-new CameraSegment with a freshly generated id when the segment lookup fails (via timeline.tracks[trackIndex] and track.segments.firstIndex), which creates a phantom segment whose id won't match the model; change the failure path to (1) emit an assertionFailure or a logged warning in debug builds to surface the inconsistency and (2) avoid creating a segment with a new id — instead return a fallback CameraSegment constructed using the requested id (use CameraSegment(id: id, ...)) or change the getter to return an optional so callers must handle the missing segment; apply the same fix to the other similar getters referenced (the other CameraSegment binding getters).
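A sketch of that failure path with a simplified stand-in `CameraSegment`: the fallback reuses the requested `id` rather than minting a phantom one, and an `assertionFailure` surfaces the inconsistency in debug builds (while release builds still return a usable fallback).

```swift
import Foundation

// Simplified stand-in for the app's CameraSegment.
struct CameraSegment {
    let id: UUID
    var startTime: Double
    var endTime: Double
}

// Lookup helper for a binding getter: on a miss, flag the inconsistent view
// state in debug builds and return a fallback that keeps the requested id,
// so a later setter keyed on that id is not silently guaranteed to fail.
func segment(withID id: UUID, in segments: [CameraSegment]) -> CameraSegment {
    if let found = segments.first(where: { $0.id == id }) {
        return found
    }
    assertionFailure("Binding getter could not find segment \(id)")  // debug-only trap
    return CameraSegment(id: id, startTime: 0, endTime: 1)           // keep the requested id
}
```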
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Screenize/App/AppState.swift`:
- Around line 351-372: buildCaptureMeta currently hardcodes scaleFactor = 2.0
for all branches, which breaks non‑Retina or >2× displays; update
buildCaptureMeta to compute the scale factor dynamically (e.g., use the NSScreen
backingScaleFactor for the relevant screen) instead of the literal 2.0: for
.display use the matching display/NSScreen's backingScaleFactor (falling back to
NSScreen.main?.backingScaleFactor ?? 1.0), for .window use the window's
screen?.backingScaleFactor (or find the NSScreen containing window.frame), for
.region find the NSScreen that contains the rect and use its backingScaleFactor,
and for the extractVideoMetadata fallback derive a sensible scale
(NSScreen.main?.backingScaleFactor ?? 1.0) rather than hardcoding 2.0; update
all uses that construct CaptureMeta to pass this computed scaleFactor.
- Around line 374-383: createProject currently calls extractVideoMetadata then
calls buildCaptureMeta with the URL, causing extractVideoMetadata to run twice
and leaving a dead fallback; change buildCaptureMeta to accept precomputed
videoMetadata (or a video size/width/height struct) so createProject passes the
already-extracted videoMetadata into buildCaptureMeta and remove the unreachable
"?? CaptureMeta(...)" fallback in createProject; update the other call site (the
invocation in ScreenizeApp.swift at the former line ~306) to either pre-extract
video metadata before calling the new buildCaptureMeta signature or call a new
overload that accepts a URL and performs extraction, so both usages are covered
without redundant async I/O (symbols: createProject(packageInfo:),
buildCaptureMeta(...), extractVideoMetadata(...), and the ScreenizeApp.swift
caller).
In `@Screenize/Views/Inspector/InspectorView.swift`:
- Around line 235-266: The setters for the X/Y TextFields currently call
onSegmentChange?() explicitly after mutating segment.wrappedValue (which already
triggers cameraSegmentBinding's setter and calls onSegmentChange?()); remove the
explicit onSegmentChange?() calls inside those Binding set closures (the ones
that build a new TransformValue and assign it to segment.wrappedValue[keyPath:
keyPath]) so the change only fires once via the binding propagation; similarly
consider the alternative approach mentioned for CenterPointPicker: either keep
its onChange?() and remove propagation-based calls, or keep propagation and drop
CenterPointPicker's onChange?() to avoid double notifications (refer to symbols:
segment.wrappedValue, keyPath, TransformValue, onSegmentChange?(),
cameraSegmentBinding, CenterPointPicker, centerX/centerY, onChange?()).
- Around line 364-382: The timeRangeFields view currently writes raw values from
the TextFields into the start/end bindings allowing negative or start> end;
modify the binding setters used in timeRangeFields (the start and end
Binding<TimeInterval> closures) to validate and clamp values before assigning
(mirror EditorViewModel.updateSegmentTimeRange): clamp start to max(0,
min(duration, newStart)), then clamp end to min(duration, max(clampedStart +
0.001, newEnd)), and reject/ignore updates that would produce start >= end;
alternatively add guards at the top of updateSegment to perform the same
clamping/validation using startTime/endTime before committing the segment update
so invalid ranges are never applied.
---
Duplicate comments:
In `@Screenize/App/AppState.swift`:
- Around line 356-359: The scaleFactor calculation in CaptureMeta is wrong
because CGFloat(display.width) / CGFloat(display.width) is always 1; update the
computation in the CaptureMeta initializer (the scaleFactor argument) to use the
display's actual pixel-to-point ratio (e.g., use display.pixelWidth /
display.width or display.backingScaleFactor / display.nativeScale if those
properties exist) instead of width/width, and keep a sensible fallback (1.0 or
the prior 2.0) if the native scale property is unavailable; ensure the corrected
expression replaces the current CGFloat(display.width) / CGFloat(display.width)
* 2.0 so CaptureMeta receives the true device scale.
In `@Screenize/Views/Inspector/InspectorView.swift`:
- Around line 390-414: The binding setters (cameraSegmentBinding,
cursorSegmentBinding, keystrokeSegmentBinding) silently ignore rejected updates
when track.updateSegment(updated) returns false, causing the UI to remain
showing the attempted value; update each setter so that when updateSegment
returns false you force a SwiftUI refresh by reassigning the track back into
timeline.tracks (e.g., self.timeline.tracks[trackIndex] = .camera(track) or the
appropriate enum case) or by invoking a change notifier and still call
onSegmentChange?() so the view recomputes; ensure you reference timeline.tracks
and the specific enum cases (.camera, .cursor, .keystroke) and preserve the
existing successful-update path.
```swift
func createProject(packageInfo: PackageInfo) async -> ScreenizeProject? {
    // Extract video metadata
    guard let videoMetadata = await extractVideoMetadata(from: packageInfo.videoURL) else {
        return nil
    }

    let captureMeta = await buildCaptureMeta(videoURL: packageInfo.videoURL) ?? CaptureMeta(
        boundsPt: CGRect(x: 0, y: 0, width: videoMetadata.width / 2, height: videoMetadata.height / 2),
        scaleFactor: 2.0
    )
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
rg -n 'buildCaptureMeta' --type swift -C 3
sed -n '352,370p' Screenize/App/AppState.swift
sed -n '295,315p' Screenize/ScreenizeApp.swift
sed -n '374,384p' Screenize/App/AppState.swift
sed -n '280,320p' Screenize/ScreenizeApp.swift
```
extractVideoMetadata is called redundantly at line 380 in createProject.
createProject extracts video metadata at line 376, then calls buildCaptureMeta at line 380 which internally calls extractVideoMetadata again when selectedTarget is nil. Additionally, the ?? CaptureMeta(…) fallback on lines 380-383 is dead code: buildCaptureMeta can only return nil when its internal extractVideoMetadata fails, but createProject already early-returns on that same failure at line 376-378.
To refactor this: pass the already-loaded videoMetadata into buildCaptureMeta instead of the URL to avoid the redundant async I/O and eliminate the dead code. However, note that buildCaptureMeta is also called from ScreenizeApp.swift:306 where video metadata is not pre-extracted, so the refactoring strategy must handle both call sites—either by extracting video metadata at that call site first, or by providing an overload that accepts video size.
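The overload split described above can be sketched roughly as follows. The types here are simplified stand-ins (plain `Double` fields instead of the app's `CGRect`-based `CaptureMeta`), and the `extract` closure stands in for the async `extractVideoMetadata`; the exact signatures are assumptions, not the project's API.

```swift
import Foundation

// Simplified stand-ins for the app's metadata types.
struct VideoMetadata { let width: Double; let height: Double }
struct CaptureMeta: Equatable {
    let widthPt: Double
    let heightPt: Double
    let scaleFactor: Double
}

// Overload 1: pure, takes precomputed metadata. createProject would call this
// with the metadata it already extracted, so extraction runs exactly once and
// the dead "?? CaptureMeta(...)" fallback disappears.
func buildCaptureMeta(videoMetadata: VideoMetadata, scaleFactor: Double) -> CaptureMeta {
    CaptureMeta(widthPt: videoMetadata.width / scaleFactor,
                heightPt: videoMetadata.height / scaleFactor,
                scaleFactor: scaleFactor)
}

// Overload 2: URL-based, for call sites (like the ScreenizeApp one) that have
// not extracted metadata yet; it extracts once and delegates to overload 1.
func buildCaptureMeta(videoURL: URL, scaleFactor: Double,
                      extract: (URL) -> VideoMetadata?) -> CaptureMeta? {
    extract(videoURL).map { buildCaptureMeta(videoMetadata: $0, scaleFactor: scaleFactor) }
}
```

Both call sites are covered: one passes metadata it already has, the other pays for extraction exactly once inside the URL overload.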
```swift
TextField("", value: Binding(
    get: { Double(segment.wrappedValue[keyPath: keyPath].center.x) },
    set: { newX in
        let current = segment.wrappedValue[keyPath: keyPath]
        segment.wrappedValue[keyPath: keyPath] = TransformValue(
            zoom: current.zoom,
            center: NormalizedPoint(
                x: max(0, min(1, CGFloat(newX))),
                y: current.center.y
            )
        )
        onSegmentChange?()
    }
), format: .number.precision(.fractionLength(2)))
.textFieldStyle(.roundedBorder)
Text("Y")
    .font(.system(size: 10))
    .foregroundColor(.secondary)
    .frame(width: 12)
TextField("", value: Binding(
    get: { Double(segment.wrappedValue[keyPath: keyPath].center.y) },
    set: { newY in
        let current = segment.wrappedValue[keyPath: keyPath]
        segment.wrappedValue[keyPath: keyPath] = TransformValue(
            zoom: current.zoom,
            center: NormalizedPoint(
                x: current.center.x,
                y: max(0, min(1, CGFloat(newY)))
            )
        )
        onSegmentChange?()
    }
```
onSegmentChange?() is called twice per edit — once explicitly, once via the binding setter.
The X/Y text field setters modify segment.wrappedValue (which triggers cameraSegmentBinding's setter → onSegmentChange?()) and then also call onSegmentChange?() explicitly on lines 246 and 265. The same double-fire happens in CenterPointPicker via its onChange callback (line 598) plus the binding propagation.
Remove the explicit onSegmentChange?() calls here, since the binding's setter already handles it.
Proposed fix
```diff
 set: { newX in
     let current = segment.wrappedValue[keyPath: keyPath]
     segment.wrappedValue[keyPath: keyPath] = TransformValue(
         zoom: current.zoom,
         center: NormalizedPoint(
             x: max(0, min(1, CGFloat(newX))),
             y: current.center.y
         )
     )
-    onSegmentChange?()
 }
```

```diff
 set: { newY in
     let current = segment.wrappedValue[keyPath: keyPath]
     segment.wrappedValue[keyPath: keyPath] = TransformValue(
         zoom: current.zoom,
         center: NormalizedPoint(
             x: current.center.x,
             y: max(0, min(1, CGFloat(newY)))
         )
     )
-    onSegmentChange?()
 }
```

Similarly, in CenterPointPicker, the `onChange?()` call on line 598 is redundant if the centerX/centerY bindings already propagate through the segment binding. However, if CenterPointPicker is intended to be reusable outside this context, keeping `onChange` there and removing it from the binding chain would be the cleaner option.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```swift
TextField("", value: Binding(
    get: { Double(segment.wrappedValue[keyPath: keyPath].center.x) },
    set: { newX in
        let current = segment.wrappedValue[keyPath: keyPath]
        segment.wrappedValue[keyPath: keyPath] = TransformValue(
            zoom: current.zoom,
            center: NormalizedPoint(
                x: max(0, min(1, CGFloat(newX))),
                y: current.center.y
            )
        )
    }
), format: .number.precision(.fractionLength(2)))
.textFieldStyle(.roundedBorder)
Text("Y")
    .font(.system(size: 10))
    .foregroundColor(.secondary)
    .frame(width: 12)
TextField("", value: Binding(
    get: { Double(segment.wrappedValue[keyPath: keyPath].center.y) },
    set: { newY in
        let current = segment.wrappedValue[keyPath: keyPath]
        segment.wrappedValue[keyPath: keyPath] = TransformValue(
            zoom: current.zoom,
            center: NormalizedPoint(
                x: current.center.x,
                y: max(0, min(1, CGFloat(newY)))
            )
        )
    }
```
```diff
+private func timeRangeFields(start: Binding<TimeInterval>, end: Binding<TimeInterval>) -> some View {
+    HStack(spacing: 8) {
+        VStack(alignment: .leading, spacing: 4) {
+            Text("Start")
+                .font(.caption)
+                .foregroundStyle(.secondary)
+            TextField("0.0", value: start, format: .number.precision(.fractionLength(2)))
+                .textFieldStyle(.roundedBorder)
+        }
+
-private func deleteButton(for keyframeID: UUID, trackType: TrackType) -> some View {
-    Button(role: .destructive) {
-        onDeleteKeyframe?(keyframeID, trackType)
-    } label: {
-        HStack {
-            Image(systemName: "trash")
-            Text("Delete Keyframe")
+        VStack(alignment: .leading, spacing: 4) {
+            Text("End")
+                .font(.caption)
+                .foregroundStyle(.secondary)
+            TextField("0.0", value: end, format: .number.precision(.fractionLength(2)))
+                .textFieldStyle(.roundedBorder)
+        }
-        .frame(maxWidth: .infinity)
     }
-    .buttonStyle(.bordered)
-    .controlSize(.regular)
 }
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
#!/bin/bash
# Find updateSegment function implementation
ast-grep --pattern $'mutating func updateSegment($_) -> Bool {
  $$$
}'

# Also search for updateSegment more broadly if the pattern doesn't match
rg "updateSegment" --type swift -A 10 -B 2

# Find Track.swift to understand the segment validation
fd "Track.swift" --type f

# Check how the Inspector creates and uses the bindings for start/end times
rg "timeRangeFields" --type swift -B 5 -A 15

# Find where start and end bindings are created in Inspector
rg "Binding.*TimeInterval" Screenize/Views/Inspector/InspectorView.swift -A 3
```
Add validation to prevent start > end and negative time values in the Inspector.
The timeRangeFields bindings directly set startTime and endTime without validation, allowing users to create invalid states (start > end, negative values). While updateSegment checks for segment overlap, it doesn't enforce the logical constraint that startTime < endTime or non-negative bounds. The EditorViewModel.updateSegmentTimeRange method demonstrates the correct approach: clamping start to max(0, min(duration, startTime)) and end to min(duration, max(clampedStart + 0.001, endTime)).
Apply similar validation in the binding setters when updating start/end times, or add guards in updateSegment to reject invalid time ranges before allowing the update.
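The clamping rule quoted above can be sketched as a pure helper (the formula mirrors the one attributed to `EditorViewModel.updateSegmentTimeRange`; the helper name itself is illustrative):

```swift
import Foundation

// Clamp a proposed segment time range: start into [0, duration], end into
// (start, duration], with a 0.001s minimum gap so start < end always holds.
func clampedTimeRange(start: TimeInterval, end: TimeInterval,
                      duration: TimeInterval) -> (start: TimeInterval, end: TimeInterval) {
    let clampedStart = max(0, min(duration, start))
    let clampedEnd = min(duration, max(clampedStart + 0.001, end))
    return (clampedStart, clampedEnd)
}
```

A binding setter would run the typed value through this helper before assigning, so a negative start or an end before the start can never reach `updateSegment`.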
Define foundation types for the V2 smart generation pipeline: - UnifiedEvent, EventKind, EventMetadata for unified event timeline - UserIntent, TypingContext, DragContext, IntentSpan for intent classification - CameraScene, FocusRegion, FocusSource for scene segmentation - ShotType, ShotPlan for camera strategy planning - TransitionStyle, TransitionPlan for scene transition planning - SimulatedPath, TimedTransform for camera path simulation - SmartGeneratorV2 skeleton with GeneratedTimeline output type Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- SmartZoomGenerator: return CameraTrack instead of TransformTrack (keyframe-to-segment conversion moved inside generator) - ClickCursorGenerator: return CursorTrackV2 instead of CursorTrack, remove KeyframeGenerator conformance - KeystrokeGenerator: return KeystrokeTrackV2 instead of KeystrokeTrack, remove KeyframeGenerator conformance - EditorViewModel: update updateTimeline() to accept V2 types directly, delete convertToCameraTrack/convertToCursorTrack/convertToKeystrokeTrack Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Create ScreenizeTests unit test target in Xcode project with proper build configuration (TEST_HOST, BUNDLE_LOADER). Add stub implementations of EventTimeline and IntentClassifier in V2/Analysis/ directory. Write comprehensive tests: ~15 for EventTimeline (build, query, edge cases) and ~20 for IntentClassifier (typing, clicking, navigating, dragging, switching, idle detection). Tests correctly fail against stub implementations per TDD workflow. Step 2 of V2 pipeline migration. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EventTimeline.build() merges all MouseDataSource events into a unified, time-sorted stream: mouse moves (downsampled to ~10Hz), clicks (all types), keyboard events, drag start/end pairs, and UIStateSamples. Query methods use binary search for efficient range lookup. All 16 EventTimeline tests pass. Step 2 of V2 pipeline migration. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
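The binary-search range lookup mentioned above can be sketched like this (illustrative names, not the actual EventTimeline API): a lower-bound search on the time-sorted array locates both ends of the query range in O(log n).

```swift
import Foundation

// Minimal stand-in for a unified timeline event.
struct Event { let time: TimeInterval }

// First index whose time is >= t (classic lower bound over a sorted array).
func lowerBound(_ sorted: [Event], _ t: TimeInterval) -> Int {
    var lo = 0, hi = sorted.count
    while lo < hi {
        let mid = (lo + hi) / 2
        if sorted[mid].time < t { lo = mid + 1 } else { hi = mid }
    }
    return lo
}

// All events with time in [range.lowerBound, range.upperBound).
func events(in range: Range<TimeInterval>, of sorted: [Event]) -> ArraySlice<Event> {
    sorted[lowerBound(sorted, range.lowerBound)..<lowerBound(sorted, range.upperBound)]
}
```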
Classifies EventTimeline into IntentSpans with typing, clicking, navigating, dragging, scrolling, switching, and idle detection. All 36 tests pass. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move textInputRoles to UIElementInfo, add temporary adapter in SmartZoomGenerator that converts IntentSpans back to ActivityEvents for SessionClusterer compatibility, and delete ActivityCollector. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…enhance waypoint generation for clicks and typing
…ots and enhance navigating intent classification by increasing click distance.
…efining app switching detection, improving navigating focus, and adjusting camera anticipation based on waypoint urgency.
…ity reporting scripts.
…uce low-confidence camera adjustments in `ShotPlanner`.
…lanning, and add new transition templates with corresponding settings.
…erk-limited acceleration, and pre-arrival settling.
…terpolation modes, diagnostics logging, and text-critical motion blur suppression.
…ion presets and perceptual content complexity.
…nt and data processing, and adjust default rendering and temporal reconstruction settings.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
18-task plan covering ROI fixes, V2 curve-based motion generator, ContinuousCamera physics tuning, and validation metrics. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The 30% saliency blend in computeActivityCenter() was pulling the camera away from actual user activity due to insufficient saliency data quality. The nearestSaliencyCenter helper method is retained but no longer called from the hot path. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Element frames are now scored by bounds validity, size reasonableness, and cursor overlap. Only frames with confidence >= 0.5 are used for zoom calculation, preventing garbage coordinates from influencing ROI. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Recent events and action events (clicks, keystrokes) now carry more weight in the camera-center computation, producing a more accurate ROI. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
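One common way to realize this weighting is an exponentially decaying recency term plus a flat multiplier for action events. The half-life and boost constants below are hypothetical, not the shipped tuning:

```python
def activity_center(events, now, half_life=1.5, action_boost=2.0):
    """Weighted mean of event positions. events is a list of
    (timestamp, (x, y), is_action); recency decays exponentially and
    action events (clicks, keystrokes) get a fixed multiplier."""
    wx = wy = wsum = 0.0
    for t, (x, y), is_action in events:
        w = 0.5 ** ((now - t) / half_life)   # recency decay
        if is_action:
            w *= action_boost                 # clicks/keystrokes dominate
        wx += w * x
        wy += w * y
        wsum += w
    if wsum == 0.0:
        return (0.5, 0.5)                     # no events: fall back to screen center
    return (wx / wsum, wy / wsum)
```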
Validates all computed centers: bounds clamping, viewport overlap, and implausible-jump smoothing for adjacent scenes < 0.3s apart. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
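Two of the three validations (bounds clamping and implausible-jump smoothing) can be sketched as below; the viewport-overlap check is omitted for brevity, and the jump thresholds other than the 0.3 s window are assumptions:

```python
def validate_centers(centers, times, max_jump=0.4, window=0.3):
    """Clamp each center into the unit square and damp implausible jumps
    between adjacent scenes closer than `window` seconds apart."""
    out = []
    for i, ((x, y), t) in enumerate(zip(centers, times)):
        x = min(max(x, 0.0), 1.0)
        y = min(max(y, 0.0), 1.0)
        if out:
            px, py = out[-1]
            dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if t - times[i - 1] < window and dist > max_jump:
                # Implausible jump: move halfway instead of snapping.
                x, y = (px + x) / 2, (py + y) / 2
        out.append((x, y))
    return out
```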
Two-pass approach: (1) unify near-identical adjacent zoom levels, (2) Gaussian envelope smoothing. Eliminates ping-pong zoom patterns. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
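The two passes can be sketched as follows; the unification epsilon and Gaussian sigma are illustrative, not the shipped values:

```python
import math

def smooth_zoom(levels, unify_eps=0.05, sigma=1.0):
    """Pass 1: unify near-identical adjacent zoom levels. Pass 2: Gaussian
    envelope smoothing with clamped boundaries."""
    # Pass 1: snap a level to its predecessor when the difference is tiny,
    # which removes the source of ping-pong oscillation outright.
    unified = list(levels)
    for i in range(1, len(unified)):
        if abs(unified[i] - unified[i - 1]) < unify_eps:
            unified[i] = unified[i - 1]
    # Pass 2: discrete Gaussian convolution, edges clamped to the end values.
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    ksum = sum(kernel)
    n = len(unified)
    out = []
    for i in range(n):
        acc = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), n - 1)
            acc += kernel[k + radius] * unified[j]
        out.append(acc / ksum)
    return out
```

A ping-pong sequence whose swings are below the epsilon collapses to a constant in pass 1, so pass 2 has nothing left to blur.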
Pure math utility for smooth camera curves. Guarantees no overshoot between data points, critical for zoom and center coordinates. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
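The standard construction with this no-overshoot guarantee is Fritsch–Carlson monotone cubic Hermite interpolation; a sketch follows. This is the textbook algorithm, not necessarily the project's own utility:

```python
def monotone_tangents(xs, ys):
    """Fritsch-Carlson tangents: clamp slopes so the piecewise cubic
    Hermite spline never overshoots the data points."""
    n = len(xs)
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        # Flat tangent at local extrema, otherwise average of secants.
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2
    for i in range(n - 1):
        if d[i] == 0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            s = a * a + b * b
            if s > 9:  # clamp into the monotonicity region
                t = 3 / s ** 0.5
                m[i], m[i + 1] = t * a * d[i], t * b * d[i]
    return m

def hermite_eval(xs, ys, m, x):
    """Evaluate the cubic Hermite spline at x (xs strictly increasing)."""
    i = max(j for j in range(len(xs) - 1) if xs[j] <= x) if x > xs[0] else 0
    h = xs[i + 1] - xs[i]
    t = (x - xs[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * ys[i] + h10 * h * m[i] + h01 * ys[i + 1] + h11 * h * m[i + 1]
```

For camera use, zoom and each center coordinate get their own spline, so no axis ever swings past its waypoint values.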
Converts ShotPlans + TransitionPlans into waypoints with sub-waypoint injection for typing (caret tracking), dragging, navigating, and scrolling. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Orchestrates waypoint generation + Hermite spline interpolation + 60Hz sampling to produce smooth continuous camera transforms.
Replace discrete simulation + post-processing + emission pipeline with continuous curve-based motion. V2 now outputs continuousTransforms, enabling smooth simultaneous zoom and pan.
FocusTyping preset uses tighter sub-waypoint intervals (0.25s / 0.03 distance) for responsive caret tracking. Cinematic preset uses wider spacing (0.5s / 0.06) for more deliberate camera movement. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…simulator

Tapered stiffness (tanh soft-cap at 0.15 radius) prevents explosive acceleration from large displacements. Velocity-dependent damping (factor 0.5) reduces overshoot at high speeds. Both improve ContinuousCamera motion smoothness. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
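One integration step of such a tapered spring can be sketched as below. The 0.15 soft-cap radius and 0.5 velocity-damping factor come from the commit; the stiffness and base damping constants are illustrative:

```python
import math

def spring_step(pos, vel, target, dt, k=40.0, damping=12.0,
                soft_radius=0.15, vel_damp=0.5):
    """One semi-implicit Euler step of a tapered spring. The displacement is
    soft-capped with tanh so a huge jump to a far target cannot produce
    explosive acceleration, and damping grows with speed to curb overshoot."""
    disp = target - pos
    # Near-linear for small displacements, saturating at soft_radius.
    tapered = soft_radius * math.tanh(disp / soft_radius)
    accel = k * tapered - (damping + vel_damp * abs(vel)) * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel
```

Because tanh saturates, the restoring force is bounded by k * soft_radius no matter how far away the target is.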
Removes the discontinuity at app-switch boundaries. The urgency multiplier system (0.05x response) produces equivalently fast motion with a continuous acceleration profile instead of a jarring position jump. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
maxPositionJerk 8→4, maxZoomJerk 10→5 for smoother acceleration. Urgency multipliers tuned: immediate 0.05→0.15 (works with removed hard snap), high 0.5→0.6 (less aggressive), lazy 2.0→1.5 (more responsive). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
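A jerk limit of this kind is just a clamp on how fast acceleration may change; a minimal sketch with the post-tuning position limit of 4 units/s^3:

```python
def jerk_limited_accel(accel, desired_accel, dt, max_jerk=4.0):
    """Move acceleration toward the desired value by at most max_jerk * dt,
    so the acceleration profile stays continuous and smooth."""
    delta = desired_accel - accel
    max_step = max_jerk * dt
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return accel + delta
```

Halving the limit (8 to 4) halves how abruptly the camera can ramp its acceleration, which is what makes the motion read as smoother.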
Temporal coalescing window 0.0001→0.05s to merge near-simultaneous waypoints. 3-sample moving average on caret positions for typing detail waypoints. Redundant waypoint elimination within 0.02 center distance or 0.05 zoom distance. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
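The coalescing and redundancy passes can be sketched as below (the caret moving average is omitted). The 0.05 s window and the 0.02 / 0.05 epsilons come from the commit; the waypoint tuple shape is an assumption:

```python
def coalesce(waypoints, window=0.05):
    """Merge runs of waypoints closer than `window` seconds, averaging them.
    Each waypoint is assumed to be (time, center_x, center_y, zoom)."""
    merged, group = [], [waypoints[0]]
    for wp in waypoints[1:]:
        if wp[0] - group[-1][0] < window:
            group.append(wp)
        else:
            merged.append(tuple(sum(col) / len(group) for col in zip(*group)))
            group = [wp]
    merged.append(tuple(sum(col) / len(group) for col in zip(*group)))
    return merged

def drop_redundant(waypoints, center_eps=0.02, zoom_eps=0.05):
    """Drop waypoints whose center and zoom barely differ from the survivor
    before them."""
    out = [waypoints[0]]
    for t, x, y, z in waypoints[1:]:
        _, px, py, pz = out[-1]
        if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 < center_eps and abs(z - pz) < zoom_eps:
            continue
        out.append((t, x, y, z))
    return out
```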
Only blends for .high/.normal urgency within 0.3s window. Max 30% blend factor via smoothstep. Creates natural anticipation before camera movements without premature panning on app switches or idle. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
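The gating and ramp can be sketched as below; urgency is modeled with strings here in place of the Swift enum cases:

```python
def anticipation_blend(time_to_waypoint, urgency, window=0.3, max_blend=0.3):
    """Blend factor toward the next waypoint. Only high/normal urgency
    qualifies; inside the window the factor ramps up via smoothstep and is
    capped at 30%, so there is no premature panning on app switches or idle."""
    if urgency not in ("high", "normal"):
        return 0.0
    if time_to_waypoint >= window:
        return 0.0
    t = 1.0 - time_to_waypoint / window   # 0 at the window edge, 1 at arrival
    smooth = t * t * (3 - 2 * t)           # smoothstep ramp
    return max_blend * smooth
```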
C1/C2 discontinuity measurement (p95), zoom direction reversals per minute, event-in-viewport coverage percentage, and segment overlap detection for quality validation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
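Two of these metrics are simple to state precisely; a sketch of zoom-reversal counting and a nearest-rank p95 (the exact percentile method used by the project is an assumption):

```python
def zoom_reversals_per_minute(zooms, duration_s):
    """Count sign changes in the zoom trajectory's direction, normalized
    per minute. Frequent reversals indicate ping-pong zooming."""
    reversals, prev_dir = 0, 0
    for a, b in zip(zooms, zooms[1:]):
        d = (b > a) - (b < a)               # -1, 0, or +1
        if d != 0 and prev_dir != 0 and d != prev_dir:
            reversals += 1
        if d != 0:
            prev_dir = d
    return reversals * 60.0 / duration_s

def p95(values):
    """Nearest-rank 95th percentile, as used for the C1/C2 discontinuity
    magnitudes (finite differences of velocity / acceleration)."""
    ranked = sorted(values)
    idx = min(len(ranked) - 1, int(0.95 * (len(ranked) - 1) + 0.5))
    return ranked[idx]
```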