
feat: add log viewer/inspector to admin UI #183

Open

staging-devin-ai-integration[bot] wants to merge 5 commits into main from devin/1774212545-log-viewer
Conversation


@staging-devin-ai-integration staging-devin-ai-integration bot commented Mar 22, 2026

Summary

Adds a log viewer/inspector to the admin UI section with:

Backend (apps/skit/src/log_viewer.rs):

  • GET /api/v1/logs — paginated log retrieval with byte-offset paging, forward/backward direction, level + text filtering
  • GET /api/v1/logs/stream — SSE endpoint for live tail, polls file for new data every 500ms
  • Admin-only access via access_all_sessions permission
  • Carry-buffer algorithm for backward multi-chunk reading that correctly joins partial lines at chunk boundaries
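The carry-buffer idea can be sketched over an in-memory buffer. This is a simplified, hypothetical sketch of the approach (function name, signature, and the lack of filtering are all illustrative), not the code in apps/skit/src/log_viewer.rs, which works on a file handle:

```rust
/// Read the last `limit` lines of `data` by scanning fixed-size chunks
/// backward, carrying partial-line fragments across chunk boundaries.
fn read_backward_lines(data: &[u8], chunk_size: usize, limit: usize) -> Vec<String> {
    let mut collected: Vec<String> = Vec::new();
    let mut carry = String::new(); // tail fragment of a line split by a chunk boundary
    let mut end = data.len();

    while end > 0 && collected.len() < limit {
        let start = end.saturating_sub(chunk_size);
        let text = String::from_utf8_lossy(&data[start..end]);
        let mut parts: Vec<&str> = text.split('\n').collect();

        // The last split element joins with the carried fragment to complete
        // the line that straddled the previous chunk boundary.
        let completed_tail = format!("{}{}", parts.pop().unwrap_or(""), carry);

        if start > 0 && parts.is_empty() {
            // Chunk contains no newline: the whole chunk is still a fragment,
            // so keep carrying it instead of emitting a truncated line.
            carry = completed_tail;
            end = start;
            continue;
        }

        // If we're not at the start of the file, the first element is itself
        // partial: it becomes the carry for the next (earlier) chunk.
        carry = if start > 0 { parts.remove(0).to_string() } else { String::new() };

        let mut lines: Vec<String> = parts.iter().map(|s| s.to_string()).collect();
        if !completed_tail.is_empty() {
            lines.push(completed_tail);
        }
        // Going backward, so prepend to keep file order.
        lines.append(&mut collected);
        collected = lines;
        end = start;
    }

    let excess = collected.len().saturating_sub(limit);
    collected.split_off(excess)
}
```

With a chunk size smaller than a line, the fragment is carried until a newline appears, so lines come out whole regardless of where chunk boundaries fall.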

Frontend (ui/src/views/LogsView.tsx, LogsView.styles.ts, ui/src/services/logs.ts):

  • Paginated display with "Older" / "Newer" / "Latest" navigation
  • Filter bar: text search input + log level dropdown (all/error/warn/info/debug)
  • Live tail toggle with SSE streaming and auto-scroll
  • Monospace log display with level-based color highlighting (red for errors, amber for warnings, muted for debug/trace)
  • Custom useLogViewer hook to keep component under max-statements lint rule

Routing (App.tsx, AdminNav.tsx):

  • /admin/logs route + "Logs" link in admin navigation

E2E Tests (e2e/tests/logs.spec.ts):

  • UI presence tests (view, filter controls, pagination buttons)
  • Filter interaction tests (level select, text filter)
  • API tests with graceful skip when file logging is disabled (CI runs with SK_LOG__FILE_ENABLE=false)
  • Admin nav visibility test

Review & Testing Checklist for Human

  • Navigate to Admin > Logs — verify log lines load, color coding displays correctly for different levels (error=red, warn=amber, debug=muted)
  • Test "Older" pagination — click Older multiple times to page backward through the log file; verify lines don't repeat or get lost
  • Test live tail — click "Live tail", trigger some server activity, verify new log lines appear and auto-scroll works; click "Stop tail" to pause
  • Test filtering — enter a text filter (e.g. "server") and select a level (e.g. "Error"), click Filter; verify only matching lines appear
  • Verify admin-only access — as a non-admin role, navigate to /admin/logs and confirm the "Admin role required" message appears

Notes

  • The backward read uses a carry-buffer algorithm to correctly handle lines that span chunk boundaries during multi-chunk reads. A unit test (test_read_backward_multi_chunk_preserves_lines) validates this.
  • Devin Review flagged that forward read with restrictive filters could scan an entire large file unboundedly. This is a valid concern for multi-GB files but doesn't affect correctness — a max-bytes-scanned budget could be added as a follow-up.
  • E2E API tests gracefully skip with test.skip when the server has file logging disabled (as in CI), so they only run against servers with file_enable = true.

Link to Devin session: https://staging.itsdev.in/sessions/1d500de08b964b1fa0db6abeafe541f4
Requested by: @streamer45



Add a new log viewer in the admin section with:
- Backend API: GET /api/v1/logs for paginated log reading with
  byte-offset paging, direction (forward/backward), and level+text
  filtering. GET /api/v1/logs/stream SSE endpoint for live tail.
- Frontend: LogsView component with filter bar (text search + level
  dropdown), pagination (Older/Newer/Latest), and live tail toggle
  with auto-scroll.
- Admin-only access via access_all_sessions permission.
- E2E tests covering navigation, filtering, pagination, and API
  responses.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
staging-devin-ai-integration bot commented

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@staging-devin-ai-integration staging-devin-ai-integration bot left a comment

Devin Review found 4 potential issues.

View 2 additional findings in Devin Review.


Comment on lines +232 to +239
if seek_to > 0 {
    if let Some(partial) = lines_iter.next_line().await.map_err(|e| {
        warn!("Failed to read log line: {e}");
        StatusCode::INTERNAL_SERVER_ERROR
    })? {
        bytes_read += partial.len() as u64 + 1; // +1 for newline
    }
}

🔴 Forward pagination silently drops one log line at every page boundary

In read_forward, when seek_to > 0, the code unconditionally skips the first line as a "partial line" (apps/skit/src/log_viewer.rs:232-238). However, the next_offset returned by a previous forward read (apps/skit/src/log_viewer.rs:257) points to the exact start of the next unread line (it's seek_to + bytes_read where bytes_read accounts for all content including newlines). This means on continuation reads, the skipped line is actually a complete line, not a partial one.

Concrete trace showing the dropped line

Given file content "line1\nline2\nline3\nline4\n" (24 bytes) and limit=2:

Page 1 (offset=0): seek_to=0, no skip → reads ["line1", "line2"], bytes_read=12, next_offset=12

Page 2 (offset=12): seek_to=12 (byte 12 is start of "line3"), skip first line → skips "line3" as partial, reads ["line4"], next_offset=24

"line3" is silently dropped from the output.

Prompt for agents
In apps/skit/src/log_viewer.rs, the read_forward function (lines 231-239) unconditionally skips the first line when seek_to > 0, treating it as a partial line. However, next_offset (line 257) is calculated as seek_to + bytes_read, which points to the exact start of the next complete line. This means on continuation reads the first line is a complete line that gets incorrectly skipped.

Fix: Adjust next_offset at line 257 to point into the last-read line (e.g. at its final newline) rather than after it, so the next read's partial-line skip discards the already-consumed tail. One approach: change line 257 to `let next_offset = seek_to + bytes_read.saturating_sub(1);` — this positions the cursor on the trailing newline of the last read line, so the subsequent partial-line skip returns an empty string and the next complete line is correctly included.

Alternatively, refactor the skip logic to check whether the byte immediately before the offset is a newline (indicating the offset is at a line boundary) and skip only when it is not.
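The boundary-check alternative can be sketched over an in-memory buffer. This is a hypothetical stand-in (read_forward_lines, no filtering), not the PR's read_forward: the offset is treated as mid-line only when the byte just before it is not a newline, so continuation reads that land exactly on a line start drop nothing.

```rust
/// Forward paged read that skips a leading partial line only when
/// `offset` does not sit on a line boundary. Returns the lines plus the
/// next offset (start of the next unread line).
fn read_forward_lines(data: &[u8], offset: usize, limit: usize) -> (Vec<String>, usize) {
    let mut pos = offset.min(data.len());

    // Skip a leading partial line only when we start mid-line: the byte
    // just before a line-boundary offset is always '\n'.
    if pos > 0 && data[pos - 1] != b'\n' {
        while pos < data.len() && data[pos] != b'\n' {
            pos += 1;
        }
        if pos < data.len() {
            pos += 1; // step past the newline onto the next line
        }
    }

    let mut lines = Vec::new();
    while lines.len() < limit && pos < data.len() {
        let line_end = data[pos..]
            .iter()
            .position(|&b| b == b'\n')
            .map(|i| pos + i)
            .unwrap_or(data.len());
        lines.push(String::from_utf8_lossy(&data[pos..line_end]).into_owned());
        pos = (line_end + 1).min(data.len());
    }
    (lines, pos)
}
```

Running the concrete trace above through this sketch, page 2 at offset 12 starts with "line3" rather than skipping it.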

Comment on lines +305 to +331
let mut chunk_lines: Vec<&str> = text.split('\n').collect();

// If we're not at the start of the file, the first element is a partial line — drop it
if chunk_start > 0 && !chunk_lines.is_empty() {
    chunk_lines.remove(0);
}

// Remove trailing empty element from split
if chunk_lines.last().is_some_and(|l| l.is_empty()) {
    chunk_lines.pop();
}

// Filter and prepend (we're going backward, so prepend to maintain order)
let filtered: Vec<String> = chunk_lines
    .into_iter()
    .filter(|line| !line.is_empty() && line_passes_filters(line, level, filter))
    .map(String::from)
    .collect();

// Take from the end of filtered to fill our remaining capacity
let take_count = remaining_lines.min(filtered.len());
let start_idx = filtered.len().saturating_sub(take_count);
let mut new_lines: Vec<String> = filtered[start_idx..].to_vec();
new_lines.append(&mut collected);
collected = new_lines;

current_end = chunk_start;
staging-devin-ai-integration bot commented Mar 22, 2026

📝 Info: Backward read algorithm is correct but has a theoretical edge case with lines >32KB

The read_backward function uses a carry buffer to join partial lines across chunk boundaries. I traced through multiple scenarios (line splits at chunk boundaries, single-element chunks, chunk_start=0 with carry) and confirmed the algorithm is correct for all practical cases. However, there's a theoretical edge case: if a single log line exceeds 32KB (the minimum chunk size at line 292), the chunk contains no newlines, and chunk_start > 0, the code would emit completed_tail (the entire chunk + old carry) as a finished line rather than carrying the whole fragment forward. In practice this is nearly impossible since log lines are typically <1KB, but worth documenting.


Comment on lines +366 to +371
<Button
  onClick={lv.handleLoadNewer}
  disabled={lv.isLoading || !lv.hasMore || lv.liveTail}
  variant="ghost"
  data-testid="logs-load-newer"
>
staging-devin-ai-integration bot commented Mar 22, 2026

🚩 hasMore semantics are direction-dependent but UI treats them uniformly

After a backward read, has_more means "there are older logs before the current position" (apps/skit/src/log_viewer.rs:343). The "Newer" button is disabled when !lv.hasMore (ui/src/views/LogsView.tsx:368), which means after the initial backward load from the end of the file, "Newer" is enabled (because there are older logs) even though there are no newer logs to load. Similarly, if the user were to navigate to the very start of the file, hasMore would be false, disabling "Newer" even though all the newer logs exist. This is a UX inconsistency stemming from using a single hasMore state for both pagination directions. Not flagged as a bug since the buttons don't crash and the feature is new, but the pagination model may benefit from tracking separate forward/backward cursors.


streamkit-devin and others added 2 commits March 22, 2026 21:20
- Fix backward multi-chunk reading dropping lines at chunk boundaries
  by using a carry buffer to join partial lines across chunks
- Fix handleLoadOlder using wrong offset (currentOffset was undefined
  after initial backward load; now uses nextOffset)
- Remove unused currentOffset state variable
- Fix E2E tests: use heading role for ambiguous 'Logs' text selector,
  remove getAuthHeaders() (cookies from ensureLoggedIn suffice),
  skip API tests gracefully when file logging is disabled (CI),
  fix admin nav test to avoid plugins page (no testid in appViews)
- Add unit test for backward multi-chunk reading to prevent regression

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
@staging-devin-ai-integration staging-devin-ai-integration bot left a comment

Devin Review found 1 new potential issue.

View 5 additional findings in Devin Review.


Comment on lines +241 to +255
while lines.len() < limit {
    match lines_iter.next_line().await {
        Ok(Some(line)) => {
            bytes_read += line.len() as u64 + 1;
            if line_passes_filters(&line, level, filter) {
                lines.push(line);
            }
        },
        Ok(None) => break,
        Err(e) => {
            warn!("Error reading log line: {e}");
            break;
        },
    }
}

🚩 Forward read with active filters can scan the entire file unboundedly

In read_forward (line 241), the loop while lines.len() < limit reads lines until enough matching lines are collected OR EOF is reached. With a restrictive filter (e.g., level=error on a file with few errors), this could scan the entire multi-GB log file in a single request, holding the file handle and CPU. There's no byte-budget or timeout to bound the scan. For large log files this could cause significant latency or resource usage. A safeguard like a maximum bytes-scanned limit would prevent this.
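The suggested safeguard could look like the following in-memory sketch. Everything here is hypothetical (scan_with_budget, max_scan_bytes, the ScanResult shape), suggested by the review rather than taken from the PR: the scan stops once the budget is spent and reports where to resume.

```rust
/// Result of a budget-bounded filtered scan: matching lines, the offset
/// where scanning stopped (so the client can resume), and whether the
/// byte budget (rather than EOF or the line limit) ended the scan.
struct ScanResult {
    lines: Vec<String>,
    next_offset: usize,
    truncated: bool,
}

fn scan_with_budget(
    data: &[u8],
    offset: usize,
    limit: usize,
    max_scan_bytes: usize,
    matches: impl Fn(&str) -> bool,
) -> ScanResult {
    let mut pos = offset.min(data.len());
    let mut scanned = 0usize;
    let mut lines = Vec::new();
    let mut truncated = false;

    while lines.len() < limit && pos < data.len() {
        if scanned >= max_scan_bytes {
            truncated = true; // budget exhausted: stop and let the client resume
            break;
        }
        let line_end = data[pos..]
            .iter()
            .position(|&b| b == b'\n')
            .map(|i| pos + i)
            .unwrap_or(data.len());
        let line = String::from_utf8_lossy(&data[pos..line_end]);
        scanned += line_end - pos + 1; // count bytes scanned, match or not
        if matches(&line) {
            lines.push(line.into_owned());
        }
        pos = (line_end + 1).min(data.len());
    }
    ScanResult { lines, next_offset: pos, truncated }
}
```

A restrictive filter then costs at most max_scan_bytes per request, and the client can page through the rest via next_offset.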


staging-devin-ai-integration bot commented

Good catch — the unbounded scan is a valid concern for very large log files with restrictive filters. A maximum bytes-scanned budget would be a sensible safeguard. I'll leave this as a follow-up improvement since it doesn't affect correctness (only latency for pathological cases), but happy to add it in this PR if preferred.

Login flow redirects to /design, not back to the original page.
Follow the same pattern as monitor.spec.ts: check URL after login
and re-navigate if needed.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
@staging-devin-ai-integration staging-devin-ai-integration bot left a comment

Devin Review found 3 new potential issues.

View 8 additional findings in Devin Review.


Comment on lines +343 to +357
    current_end = chunk_start;
}

// Truncate to limit if we over-collected
if collected.len() > limit {
    let excess = collected.len() - limit;
    collected = collected[excess..].to_vec();
}

Ok(LogResponse {
    lines: collected,
    next_offset: current_end,
    has_more: current_end > 0,
    file_size,
})

🟡 read_backward drops carry buffer between paginated requests, truncating a line at each page boundary

When read_backward exits its while loop because collected.len() >= limit (enough lines found), the carry string — which holds the tail fragment of a line straddling the chunk boundary at current_end — is silently discarded. The response returns next_offset = current_end.

On the next paginated request with offset = current_end, a new read_backward call starts with an empty carry. It reads a chunk ending at current_end, and the last element popped from text.split('\n') is the head of the boundary line (from the last \n before current_end to current_end). Since carry is empty, completed_tail only contains this head — the tail portion (stored in the previous request's carry) is permanently lost. This causes the boundary line to appear truncated in the second response, and it was entirely absent from the first response.

Concrete example with a large log file

Suppose the file has lines at regular intervals and chunk_start falls at byte 67232 mid-line (between \n at byte 67200 and \n at byte 67300):

  • Request 1: carry = text[67232..67300] (68-char tail of the line). This carry is NOT included in collected. next_offset = 67232.
  • Request 2 (offset=67232): reads backward from 67232. The last popped element is text[67200..67232] (32-char head). completed_tail = head only (32 chars instead of the full 100-char line).

The line is truncated to 32 characters.

Prompt for agents
In apps/skit/src/log_viewer.rs, the read_backward function (line 267-358) needs to account for the carry buffer when computing next_offset. When the loop exits with a non-empty carry, next_offset should be advanced past the carry's bytes so that the next paginated request doesn't re-read the partial line head without its tail.

One approach: after the while loop (line 344), if carry is non-empty, add carry.len()+1 (for the newline) to current_end before using it as next_offset. This ensures the next request starts at the newline boundary, not mid-line.

Alternatively, redesign next_offset to always point to a line boundary. After the loop, scan forward from current_end to find the first newline, and set next_offset to that position + 1. This ensures the next backward read ends at a clean line boundary.
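The first approach reduces to a small offset adjustment. A hypothetical helper (not code from the PR) capturing it:

```rust
/// When a backward paged read exits with a non-empty carry, the carry
/// holds the tail of a line straddling the chunk boundary at
/// `current_end`: the carry occupies [current_end, current_end + len)
/// and its terminating '\n' sits at current_end + len. Advancing the
/// continuation offset past both puts the cursor on a line boundary, so
/// the next request reconstructs the straddling line whole.
fn next_offset_after_carry(current_end: u64, carry: &str) -> u64 {
    if carry.is_empty() {
        current_end
    } else {
        current_end + carry.len() as u64 + 1 // carry bytes + the '\n'
    }
}
```

In the concrete example above, current_end = 67232 with a 68-byte carry yields 67301, one past the newline at 67300, so request 2 ends its read at a clean line boundary.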

staging-devin-ai-integration bot commented

Good catch — the carry buffer is indeed lost between paginated requests. When the loop exits early (enough lines collected) with a non-empty carry, current_end is set to the chunk boundary rather than the line boundary. The next request then starts at that chunk boundary and reconstructs only the head portion of the straddling line, losing the tail that was in carry.

Fix approach: after the while loop, if carry is non-empty, adjust current_end forward past the carry bytes so the next request's backward read ends at the newline boundary rather than mid-line. This ensures the straddling line appears complete in exactly one response.

Comment on lines +196 to +204
const handleLoadNewer = useCallback(() => {
  loadLogs('forward', nextOffset);
}, [loadLogs, nextOffset]);

const handleLoadOlder = useCallback(() => {
  if (nextOffset > 0) {
    loadLogs('backward', nextOffset);
  }
}, [loadLogs, nextOffset]);

🟡 Single nextOffset/hasMore state used for both forward and backward pagination causes incorrect button states and duplicate results

The useLogViewer hook stores a single nextOffset and hasMore pair from the last API response, but their semantics differ by direction. After a backward read, has_more means "more older lines exist" and next_offset is the backward continuation cursor. After a forward read, has_more means "more newer lines exist" and next_offset is the forward continuation cursor.

The "Newer" button is disabled when !hasMore (LogsView.tsx:364). After the initial backward load of the latest logs, hasMore is true (indicating more older lines), so the "Newer" button is incorrectly enabled even though there are no newer lines. Clicking it calls loadLogs('forward', nextOffset) with the backward cursor, which re-displays approximately the same lines. Similarly, handleLoadOlder (LogsView.tsx:200-204) uses the same nextOffset, so after a forward read, clicking "Older" re-displays the same data instead of showing earlier lines.

Prompt for agents
In ui/src/views/LogsView.tsx, the useLogViewer hook (starting at line 73) needs to track the current reading direction alongside the offset state. Options:

1. Track separate offsets for forward and backward: e.g. backwardOffset (for "Older") and forwardOffset (for "Newer"). After a backward load, set backwardOffset from the response's next_offset, and compute forwardOffset from the original requested offset (or file_size). After a forward load, do the reverse.

2. Track the last direction used and the window of byte offsets currently displayed (startOffset, endOffset). Use startOffset for backward pagination and endOffset for forward pagination.

3. Disable the "Newer" button based on whether the current view already shows the latest lines (e.g., track a boolean like isAtLatest that is set to true after a backward load with no offset, and false after paging older).

The hasMore flag in the disabled condition for "Newer" (line 364) should reflect forward availability, not backward availability.

staging-devin-ai-integration bot commented

Valid point. The single nextOffset/hasMore state conflates forward and backward cursors, leading to incorrect button states after direction changes. For example, after the initial backward load showing the latest logs, hasMore=true (older logs exist) incorrectly enables "Newer" even though there are no newer lines.

Fix approach: track the byte window of the currently displayed page (pageStart, pageEnd) and derive button states from those positions relative to file_size and 0. "Older" is enabled when pageStart > 0, "Newer" is enabled when pageEnd < file_size. Each direction uses the appropriate edge of the window as its offset.
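The window model can be sketched as follows. This is a hypothetical Rust sketch of the state shape (the real hook lives in TypeScript in ui/src/views/LogsView.tsx); names like PageWindow are illustrative:

```rust
/// The page currently displayed covers bytes [start, end) of the log
/// file. Button availability derives from the window edges rather than
/// a single direction-ambiguous has_more flag.
struct PageWindow {
    start: u64,     // byte offset of the first displayed line
    end: u64,       // byte offset just past the last displayed line
    file_size: u64, // total file size from the last response
}

impl PageWindow {
    fn can_load_older(&self) -> bool {
        self.start > 0 // lines exist before the window
    }
    fn can_load_newer(&self) -> bool {
        self.end < self.file_size // lines exist after the window
    }
    fn older_cursor(&self) -> u64 {
        self.start // backward read ends at the window's leading edge
    }
    fn newer_cursor(&self) -> u64 {
        self.end // forward read starts at the window's trailing edge
    }
}
```

After the initial backward load, end == file_size, so "Newer" is correctly disabled while "Older" stays enabled; at the start of the file the reverse holds.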

Comment on lines +530 to +556
#[tokio::test]
async fn test_read_backward_multi_chunk_preserves_lines() {
    use tokio::io::AsyncWriteExt;

    let dir = tempfile::tempdir().unwrap();
    let path = dir.path().join("test.log");

    // Write enough lines so that a small forced chunk triggers multi-chunk reading.
    let mut f = tokio::fs::File::create(&path).await.unwrap();
    let mut expected: Vec<String> = Vec::new();
    for i in 0..20 {
        let line = format!("2024-01-01T00:00:00Z INFO test: line number {i}");
        f.write_all(line.as_bytes()).await.unwrap();
        f.write_all(b"\n").await.unwrap();
        expected.push(line);
    }
    f.flush().await.unwrap();
    drop(f);

    let file = tokio::fs::File::open(&path).await.unwrap();
    let file_size = file.metadata().await.unwrap().len();

    let resp = read_backward(file, file_size, None, 100, None, None).await.unwrap();

    assert_eq!(resp.lines, expected, "all lines should be returned in order");
    assert!(!resp.has_more);
}

🚩 Backward reading carry logic is correct within a single request, but the test doesn't exercise multi-chunk behavior

The test test_read_backward_multi_chunk_preserves_lines at apps/skit/src/log_viewer.rs:531-556 writes 20 lines (~980 bytes total). However, the minimum chunk size is max(remaining_lines * 8192, 32768) = at least 32KB (line 292). Since the entire test file is under 1KB, the file is always read in a single chunk, so the multi-chunk carry logic is never exercised. The test name is misleading. The carry bug (BUG-0001) would only be caught by a test using a file larger than the chunk size, or by mocking the chunk size to be smaller.

