feat: add log viewer/inspector to admin UI#183
feat: add log viewer/inspector to admin UI#183staging-devin-ai-integration[bot] wants to merge 5 commits intomainfrom
Conversation
Add a new log viewer in the admin section with: - Backend API: GET /api/v1/logs for paginated log reading with byte-offset paging, direction (forward/backward), and level+text filtering. GET /api/v1/logs/stream SSE endpoint for live tail. - Frontend: LogsView component with filter bar (text search + level dropdown), pagination (Older/Newer/Latest), and live tail toggle with auto-scroll. - Admin-only access via access_all_sessions permission. - E2E tests covering navigation, filtering, pagination, and API responses. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
🤖 Devin AI EngineerI'll be helping with this pull request! Here's what you should know: ✅ I will automatically:
Note: I can only respond to comments from users who have write access to this repository. ⚙️ Control Options:
|
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
| if seek_to > 0 { | ||
| if let Some(partial) = lines_iter.next_line().await.map_err(|e| { | ||
| warn!("Failed to read log line: {e}"); | ||
| StatusCode::INTERNAL_SERVER_ERROR | ||
| })? { | ||
| bytes_read += partial.len() as u64 + 1; // +1 for newline | ||
| } | ||
| } |
There was a problem hiding this comment.
🔴 Forward pagination silently drops one log line at every page boundary
In read_forward, when seek_to > 0, the code unconditionally skips the first line as a "partial line" (apps/skit/src/log_viewer.rs:232-238). However, the next_offset returned by a previous forward read (apps/skit/src/log_viewer.rs:257) points to the exact start of the next unread line (it's seek_to + bytes_read where bytes_read accounts for all content including newlines). This means on continuation reads, the skipped line is actually a complete line, not a partial one.
Concrete trace showing the dropped line
Given file content "line1\nline2\nline3\nline4\n" (24 bytes) and limit=2:
Page 1 (offset=0): seek_to=0, no skip → reads ["line1", "line2"], bytes_read=12, next_offset=12
Page 2 (offset=12): seek_to=12 (byte 12 is start of "line3"), skip first line → skips "line3" as partial, reads ["line4"], next_offset=24
"line3" is silently dropped from the output.
Prompt for agents
In apps/skit/src/log_viewer.rs, the read_forward function (lines 231-239) unconditionally skips the first line when seek_to > 0, treating it as a partial line. However, next_offset (line 257) is calculated as seek_to + bytes_read, which points to the exact start of the next complete line. This means on continuation reads the first line is a complete line that gets incorrectly skipped.
Fix: Adjust next_offset at line 257 to point into the last-read line (e.g. at its final newline) rather than after it, so the next read's partial-line skip discards the already-consumed tail. One approach: change line 257 to `let next_offset = seek_to + bytes_read.saturating_sub(1);` — this positions the cursor on the trailing newline of the last read line, so the subsequent partial-line skip returns an empty string and the next complete line is correctly included.
Alternatively, refactor the skip logic to check whether the byte immediately before the offset is a newline (indicating the offset is at a line boundary) and skip only when it is not.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
apps/skit/src/log_viewer.rs
Outdated
| let mut chunk_lines: Vec<&str> = text.split('\n').collect(); | ||
|
|
||
| // If we're not at the start of the file, the first element is a partial line — drop it | ||
| if chunk_start > 0 && !chunk_lines.is_empty() { | ||
| chunk_lines.remove(0); | ||
| } | ||
|
|
||
| // Remove trailing empty element from split | ||
| if chunk_lines.last().is_some_and(|l| l.is_empty()) { | ||
| chunk_lines.pop(); | ||
| } | ||
|
|
||
| // Filter and prepend (we're going backward, so prepend to maintain order) | ||
| let filtered: Vec<String> = chunk_lines | ||
| .into_iter() | ||
| .filter(|line| !line.is_empty() && line_passes_filters(line, level, filter)) | ||
| .map(String::from) | ||
| .collect(); | ||
|
|
||
| // Take from the end of filtered to fill our remaining capacity | ||
| let take_count = remaining_lines.min(filtered.len()); | ||
| let start_idx = filtered.len().saturating_sub(take_count); | ||
| let mut new_lines: Vec<String> = filtered[start_idx..].to_vec(); | ||
| new_lines.append(&mut collected); | ||
| collected = new_lines; | ||
|
|
||
| current_end = chunk_start; |
There was a problem hiding this comment.
📝 Info: Backward read algorithm is correct but has a theoretical edge case with lines >32KB
The read_backward function uses a carry buffer to join partial lines across chunk boundaries. I traced through multiple scenarios (line splits at chunk boundaries, single-element chunks, chunk_start=0 with carry) and confirmed the algorithm is correct for all practical cases. However, there's a theoretical edge case: if a single log line exceeds 32KB (the minimum chunk size at line 292), the chunk contains no newlines, and chunk_start > 0, the code would emit completed_tail (the entire chunk + old carry) as a finished line rather than carrying the whole fragment forward. In practice this is nearly impossible since log lines are typically <1KB, but worth documenting.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
| <Button | ||
| onClick={lv.handleLoadNewer} | ||
| disabled={lv.isLoading || !lv.hasMore || lv.liveTail} | ||
| variant="ghost" | ||
| data-testid="logs-load-newer" | ||
| > |
There was a problem hiding this comment.
🚩 hasMore semantics are direction-dependent but UI treats them uniformly
After a backward read, has_more means "there are older logs before the current position" (apps/skit/src/log_viewer.rs:343). The "Newer" button is disabled when !lv.hasMore (ui/src/views/LogsView.tsx:368), which means after the initial backward load from the end of the file, "Newer" is enabled (because there are older logs) even though there are no newer logs to load. Similarly, if the user were to navigate to the very start of the file, hasMore would be false, disabling "Newer" even though all the newer logs exist. This is a UX inconsistency stemming from using a single hasMore state for both pagination directions. Not flagged as a bug since the buttons don't crash and the feature is new, but the pagination model may benefit from tracking separate forward/backward cursors.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
- Fix backward multi-chunk reading dropping lines at chunk boundaries by using a carry buffer to join partial lines across chunks - Fix handleLoadOlder using wrong offset (currentOffset was undefined after initial backward load; now uses nextOffset) - Remove unused currentOffset state variable - Fix E2E tests: use heading role for ambiguous 'Logs' text selector, remove getAuthHeaders() (cookies from ensureLoggedIn suffice), skip API tests gracefully when file logging is disabled (CI), fix admin nav test to avoid plugins page (no testid in appViews) - Add unit test for backward multi-chunk reading to prevent regression Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
| while lines.len() < limit { | ||
| match lines_iter.next_line().await { | ||
| Ok(Some(line)) => { | ||
| bytes_read += line.len() as u64 + 1; | ||
| if line_passes_filters(&line, level, filter) { | ||
| lines.push(line); | ||
| } | ||
| }, | ||
| Ok(None) => break, | ||
| Err(e) => { | ||
| warn!("Error reading log line: {e}"); | ||
| break; | ||
| }, | ||
| } | ||
| } |
There was a problem hiding this comment.
🚩 Forward read with active filters can scan the entire file unboundedly
In read_forward (line 241), the loop while lines.len() < limit reads lines until enough matching lines are collected OR EOF is reached. With a restrictive filter (e.g., level=error on a file with few errors), this could scan the entire multi-GB log file in a single request, holding the file handle and CPU. There's no byte-budget or timeout to bound the scan. For large log files this could cause significant latency or resource usage. A safeguard like a maximum bytes-scanned limit would prevent this.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
There was a problem hiding this comment.
Good catch — the unbounded scan is a valid concern for very large log files with restrictive filters. A maximum bytes-scanned budget would be a sensible safeguard. I'll leave this as a follow-up improvement since it doesn't affect correctness (only latency for pathological cases), but happy to add it in this PR if preferred.
Login flow redirects to /design, not back to the original page. Follow the same pattern as monitor.spec.ts: check URL after login and re-navigate if needed. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
| current_end = chunk_start; | ||
| } | ||
|
|
||
| // Truncate to limit if we over-collected | ||
| if collected.len() > limit { | ||
| let excess = collected.len() - limit; | ||
| collected = collected[excess..].to_vec(); | ||
| } | ||
|
|
||
| Ok(LogResponse { | ||
| lines: collected, | ||
| next_offset: current_end, | ||
| has_more: current_end > 0, | ||
| file_size, | ||
| }) |
There was a problem hiding this comment.
🟡 read_backward drops carry buffer between paginated requests, truncating a line at each page boundary
When read_backward exits its while loop because collected.len() >= limit (enough lines found), the carry string — which holds the tail fragment of a line straddling the chunk boundary at current_end — is silently discarded. The response returns next_offset = current_end.
On the next paginated request with offset = current_end, a new read_backward call starts with an empty carry. It reads a chunk ending at current_end, and the last element popped from text.split('\n') is the head of the boundary line (from the last \n before current_end to current_end). Since carry is empty, completed_tail only contains this head — the tail portion (stored in the previous request's carry) is permanently lost. This causes the boundary line to appear truncated in the second response, and it was entirely absent from the first response.
Concrete example with a large log file
Suppose the file has lines at regular intervals and chunk_start falls at byte 67232 mid-line (between \n at byte 67200 and \n at byte 67300):
- Request 1:
carry = text[67232..67300](68-char tail of the line). This carry is NOT included incollected.next_offset = 67232. - Request 2 (
offset=67232): reads backward from 67232. The last popped element istext[67200..67232](32-char head).completed_tail = headonly (32 chars instead of the full 100-char line).
The line is truncated to 32 characters.
Prompt for agents
In apps/skit/src/log_viewer.rs, the read_backward function (line 267-358) needs to account for the carry buffer when computing next_offset. When the loop exits with a non-empty carry, next_offset should be advanced past the carry's bytes so that the next paginated request doesn't re-read the partial line head without its tail.
One approach: after the while loop (line 344), if carry is non-empty, add carry.len()+1 (for the newline) to current_end before using it as next_offset. This ensures the next request starts at the newline boundary, not mid-line.
Alternatively, redesign next_offset to always point to a line boundary. After the loop, scan forward from current_end to find the first newline, and set next_offset to that position + 1. This ensures the next backward read ends at a clean line boundary.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
There was a problem hiding this comment.
Good catch — the carry buffer is indeed lost between paginated requests. When the loop exits early (enough lines collected) with a non-empty carry, current_end is set to the chunk boundary rather than the line boundary. The next request then starts at that chunk boundary and reconstructs only the head portion of the straddling line, losing the tail that was in carry.
Fix approach: after the while loop, if carry is non-empty, adjust current_end forward past the carry bytes so the next request's backward read ends at the newline boundary rather than mid-line. This ensures the straddling line appears complete in exactly one response.
| const handleLoadNewer = useCallback(() => { | ||
| loadLogs('forward', nextOffset); | ||
| }, [loadLogs, nextOffset]); | ||
|
|
||
| const handleLoadOlder = useCallback(() => { | ||
| if (nextOffset > 0) { | ||
| loadLogs('backward', nextOffset); | ||
| } | ||
| }, [loadLogs, nextOffset]); |
There was a problem hiding this comment.
🟡 Single nextOffset/hasMore state used for both forward and backward pagination causes incorrect button states and duplicate results
The useLogViewer hook stores a single nextOffset and hasMore pair from the last API response, but their semantics differ by direction. After a backward read, has_more means "more older lines exist" and next_offset is the backward continuation cursor. After a forward read, has_more means "more newer lines exist" and next_offset is the forward continuation cursor.
The "Newer" button is disabled when !hasMore (LogsView.tsx:364). After the initial backward load of the latest logs, hasMore is true (indicating more older lines), so the "Newer" button is incorrectly enabled even though there are no newer lines. Clicking it calls loadLogs('forward', nextOffset) with the backward cursor, which re-displays approximately the same lines. Similarly, handleLoadOlder (LogsView.tsx:200-204) uses the same nextOffset, so after a forward read, clicking "Older" re-displays the same data instead of showing earlier lines.
Prompt for agents
In ui/src/views/LogsView.tsx, the useLogViewer hook (starting at line 73) needs to track the current reading direction alongside the offset state. Options:
1. Track separate offsets for forward and backward: e.g. backwardOffset (for "Older") and forwardOffset (for "Newer"). After a backward load, set backwardOffset from the response's next_offset, and compute forwardOffset from the original requested offset (or file_size). After a forward load, do the reverse.
2. Track the last direction used and the window of byte offsets currently displayed (startOffset, endOffset). Use startOffset for backward pagination and endOffset for forward pagination.
3. Disable the "Newer" button based on whether the current view already shows the latest lines (e.g., track a boolean like isAtLatest that is set to true after a backward load with no offset, and false after paging older).
The hasMore flag in the disabled condition for "Newer" (line 364) should reflect forward availability, not backward availability.
Was this helpful? React with 👍 or 👎 to provide feedback.
Debug
There was a problem hiding this comment.
Valid point. The single nextOffset/hasMore state conflates forward and backward cursors, leading to incorrect button states after direction changes. For example, after the initial backward load showing the latest logs, hasMore=true (older logs exist) incorrectly enables "Newer" even though there are no newer lines.
Fix approach: track the byte window of the currently displayed page (pageStart, pageEnd) and derive button states from those positions relative to file_size and 0. "Older" is enabled when pageStart > 0, "Newer" is enabled when pageEnd < file_size. Each direction uses the appropriate edge of the window as its offset.
| #[tokio::test] | ||
| async fn test_read_backward_multi_chunk_preserves_lines() { | ||
| use tokio::io::AsyncWriteExt; | ||
|
|
||
| let dir = tempfile::tempdir().unwrap(); | ||
| let path = dir.path().join("test.log"); | ||
|
|
||
| // Write enough lines so that a small forced chunk triggers multi-chunk reading. | ||
| let mut f = tokio::fs::File::create(&path).await.unwrap(); | ||
| let mut expected: Vec<String> = Vec::new(); | ||
| for i in 0..20 { | ||
| let line = format!("2024-01-01T00:00:00Z INFO test: line number {i}"); | ||
| f.write_all(line.as_bytes()).await.unwrap(); | ||
| f.write_all(b"\n").await.unwrap(); | ||
| expected.push(line); | ||
| } | ||
| f.flush().await.unwrap(); | ||
| drop(f); | ||
|
|
||
| let file = tokio::fs::File::open(&path).await.unwrap(); | ||
| let file_size = file.metadata().await.unwrap().len(); | ||
|
|
||
| let resp = read_backward(file, file_size, None, 100, None, None).await.unwrap(); | ||
|
|
||
| assert_eq!(resp.lines, expected, "all lines should be returned in order"); | ||
| assert!(!resp.has_more); | ||
| } |
There was a problem hiding this comment.
🚩 Backward reading carry logic is correct within a single request, but the test doesn't exercise multi-chunk behavior
The test test_read_backward_multi_chunk_preserves_lines at apps/skit/src/log_viewer.rs:531-556 writes 20 lines (~980 bytes total). However, the minimum chunk size is max(remaining_lines * 8192, 32768) = at least 32KB (line 292). Since the entire test file is under 1KB, the file is always read in a single chunk, so the multi-chunk carry logic is never exercised. The test name is misleading. The carry bug (BUG-0001) would only be caught by a test using a file larger than the chunk size, or by mocking the chunk size to be smaller.
Was this helpful? React with 👍 or 👎 to provide feedback.
Summary
Adds a log viewer/inspector to the admin UI section with:
Backend (
apps/skit/src/log_viewer.rs):GET /api/v1/logs— paginated log retrieval with byte-offset paging, forward/backward direction, level + text filteringGET /api/v1/logs/stream— SSE endpoint for live tail, polls file for new data every 500msaccess_all_sessionspermissionFrontend (
ui/src/views/LogsView.tsx,LogsView.styles.ts,ui/src/services/logs.ts):useLogViewerhook to keep component under max-statements lint ruleRouting (
App.tsx,AdminNav.tsx):/admin/logsroute + "Logs" link in admin navigationE2E Tests (
e2e/tests/logs.spec.ts):SK_LOG__FILE_ENABLE=false)Review & Testing Checklist for Human
Notes
test_read_backward_multi_chunk_preserves_lines) validates this.test.skipwhen the server has file logging disabled (as in CI), so they only run against servers withfile_enable = true.Link to Devin session: https://staging.itsdev.in/sessions/1d500de08b964b1fa0db6abeafe541f4
Requested by: @streamer45