Conversation
Pull request overview
Bumps the app to v0.3.0 (“Major”) and introduces server advertisement injection into Minecraft servers.dat at launch, backed by a new async API fetch helper.
Changes:
- Version/codename bump across frontend, Tauri config, and Rust crate metadata.
- Add `server_ads` networking module + async `Api::json_async()` and inject fetched servers into `servers.dat` during client launch.
- Minor UI/i18n fixes (sidebar top offset; RU translation typo).
Reviewed changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 6 comments.
Summary per file:
| File | Description |
|---|---|
| src/i18n/locales/ru.json | Fix RU translation typo for “Clear filters”. |
| src/components/layout/Sidebar.vue | Adjust top sidebar positioning to respect platform-dependent vertical offset. |
| src-tauri/tauri.conf.json | Bump Tauri app version to 0.3.0. |
| src-tauri/src/core/utils/globals.rs | Update release codename to “Major”. |
| src-tauri/src/core/network/server_ads.rs | New module: fetch server ads and inject into servers.dat (custom NBT read/write). |
| src-tauri/src/core/network/mod.rs | Export the new server_ads module. |
| src-tauri/src/core/network/api.rs | Add async json_async() API helper (cached, retrying, multi-server). |
| src-tauri/src/core/clients/client/launch.rs | Fetch ads and inject servers.dat before spawning the client process. |
| src-tauri/Cargo.toml | Bump crate version; add flate2 dependency. |
| src-tauri/Cargo.lock | Lockfile updates for version bump + flate2. |
| package.json | Bump frontend version to 0.3.0. |
```rust
    return vec![];
}
let _elem_type = data[pos];
pos += 1;
```
`_elem_type` from the servers TAG_List is ignored. If the list element type isn't TAG_Compound (10), `parse_compound_entry` will misinterpret the payload and produce incorrect results. Please validate `_elem_type == 10` before iterating, and return a safe empty result (or skip) when it doesn't match.
```suggestion
pos += 1;
// Ensure the list element type is TAG_Compound (10) before parsing entries.
if _elem_type != 10 {
    return vec![];
}
```
```rust
if api_data.success.is_none() || api_data.data.is_none() {
    return Err(
        "API response does not match required ApiResponse<T> format"
            .to_string(),
    );
}

if api_data.success.unwrap_or(false) {
    let data_value = api_data.data.unwrap();
    *SERVERS.selected_api.write().unwrap() = Some(server.clone());
    return Ok(data_value);
} else {
```
`json_async` requires both `success` and `data` to be present before it even checks whether `success` is false. If the API returns an error payload like `{ success: false, error: ... }` without `data`, this will be reported as a format mismatch and the server-provided error message will be dropped. Consider checking `success` first: require `data` only when `success == true`, and otherwise use the `error` field when available.
```diff
- if api_data.success.is_none() || api_data.data.is_none() {
-     return Err(
-         "API response does not match required ApiResponse<T> format"
-             .to_string(),
-     );
- }
- if api_data.success.unwrap_or(false) {
-     let data_value = api_data.data.unwrap();
-     *SERVERS.selected_api.write().unwrap() = Some(server.clone());
-     return Ok(data_value);
- } else {
+ // First, ensure the `success` field is present.
+ let success = match api_data.success {
+     Some(s) => s,
+     None => {
+         return Err(
+             "API response missing required `success` field".to_string(),
+         );
+     }
+ };
+ if success {
+     // For successful responses, `data` must be present.
+     let data_value = match api_data.data {
+         Some(d) => d,
+         None => {
+             return Err(
+                 "API response missing `data` for successful result".to_string(),
+             );
+         }
+     };
+     *SERVERS.selected_api.write().unwrap() = Some(server.clone());
+     return Ok(data_value);
+ } else {
+     // For error responses, use the server-provided `error` message
+     // when available, without requiring `data` to be present.
```
```rust
let servers_dat_path = client_folder.join("servers.dat");
let ads = server_ads::fetch_server_ads().await;
server_ads::inject_servers_dat(&servers_dat_path, &ads);
```
This introduces a new first-time access to the global `API` via `fetch_server_ads()` during client launch. `API` is a `LazyLock<Option<Api>>` derived from `SERVERS.selected_api` at initialization time (see `core/network/api.rs`), while `SERVERS.check_servers()` runs in a background task during startup. If a user launches a client before the initial server check completes and this becomes the first `API` access, `API` can be initialized to `None` and remain unavailable for the rest of the process. Consider avoiding the `API` `LazyLock` here (construct an `Api` from `SERVERS` after `wait_for_initial_check()`), or explicitly await `SERVERS.wait_for_initial_check()` before fetching ads.
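The freeze-at-first-access hazard can be reproduced with `std::sync::LazyLock` alone. A minimal sketch (the `CHECK_DONE` flag and the URL below are stand-ins for the background server check, not code from this PR):

```rust
use std::sync::LazyLock;
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the state that SERVERS.check_servers() sets in the background.
static CHECK_DONE: AtomicBool = AtomicBool::new(false);

// Like the API LazyLock: the closure runs once, at first access,
// and the result is frozen for the rest of the process.
static API: LazyLock<Option<&'static str>> = LazyLock::new(|| {
    if CHECK_DONE.load(Ordering::SeqCst) {
        Some("https://api.example")
    } else {
        None
    }
});

fn main() {
    // First access happens before the background check completes...
    assert!(API.is_none());
    // ...the check then finishes, but the LazyLock never re-evaluates.
    CHECK_DONE.store(true, Ordering::SeqCst);
    assert!(API.is_none()); // stays None for the rest of the process
    println!("ok");
}
```

This is why awaiting the initial check (or constructing the `Api` on demand) before the first access matters.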
```rust
let count = i32::from_be_bytes([data[pos], data[pos + 1], data[pos + 2], data[pos + 3]])
    as usize;
pos += 4;

let mut servers = Vec::with_capacity(count);
for _ in 0..count {
    let (entry, next_pos) = parse_compound_entry(data, pos);
```
`count` is read as an `i32` and then cast to `usize`. If the file is corrupted (e.g., a negative count), this turns into a huge `usize`, causing a massive allocation in `Vec::with_capacity(count)` and a long loop. Please validate that `count` is non-negative and within a reasonable upper bound before allocating/iterating, and bail out safely on invalid values.
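The wrap-around is easy to demonstrate in isolation (illustrative values only, not bytes from a real `servers.dat`):

```rust
fn main() {
    // A corrupted file can encode a negative TAG_List count.
    let bytes = [0xFFu8, 0xFF, 0xFF, 0xFB]; // i32 value -5, big-endian
    let count = i32::from_be_bytes(bytes);
    assert_eq!(count, -5);

    // Casting straight to usize wraps to an enormous value...
    let huge = count as usize;
    assert!(huge > 1_000_000);

    // ...so validate before using it as a capacity or loop bound.
    let safe = usize::try_from(count).ok().filter(|&c| c <= 1_000_000);
    assert_eq!(safe, None); // negative count rejected
    println!("ok");
}
```

`usize::try_from` rejects negative values outright, so the cap check only has to bound the non-negative case.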
```rust
let count = i32::from_be_bytes([
    data[pos + 1],
    data[pos + 2],
    data[pos + 3],
    data[pos + 4],
]) as usize;
pos += 5;
for _ in 0..count {
```
In `skip_tag_payload` for TAG_List (type 9), `count` is parsed as an `i32`, cast to `usize`, and then used to drive a loop. A negative or very large count in a malformed file will cause extremely long loops and potentially runaway `pos` values. Please validate list counts (non-negative, sane max, and consistent with remaining input length) before looping.
```diff
- let count = i32::from_be_bytes([
-     data[pos + 1],
-     data[pos + 2],
-     data[pos + 3],
-     data[pos + 4],
- ]) as usize;
- pos += 5;
- for _ in 0..count {
+ let count_i32 = i32::from_be_bytes([
+     data[pos + 1],
+     data[pos + 2],
+     data[pos + 3],
+     data[pos + 4],
+ ]);
+ // Reject negative counts to avoid wrap-around when casting to usize.
+ if count_i32 < 0 {
+     return data.len();
+ }
+ let count = count_i32 as usize;
+ // Impose a sane upper bound on list size to avoid extremely long loops.
+ const MAX_NBT_LIST_ITEMS: usize = 1_000_000;
+ if count > MAX_NBT_LIST_ITEMS {
+     return data.len();
+ }
+ // Ensure the count is at least consistent with remaining input length.
+ // After the 5-byte header (elem_type + count), there are this many bytes left.
+ let remaining_after_header = data.len() - (pos + 5);
+ if count > remaining_after_header {
+     return data.len();
+ }
+ pos += 5;
+ for _ in 0..count {
+     if pos >= data.len() {
+         break;
+     }
```
```rust
fn build_nbt(servers: &[(String, String)]) -> Vec<u8> {
    let mut buf = Vec::new();

    push_tag_header(&mut buf, 10, "");

    push_tag_header(&mut buf, 9, "servers");
    buf.push(10);
    let count = servers.len() as i32;
    buf.extend_from_slice(&count.to_be_bytes());

    for (name, ip) in servers {
        push_tag_header(&mut buf, 8, "name");
        push_nbt_string(&mut buf, name);

        push_tag_header(&mut buf, 8, "ip");
        push_nbt_string(&mut buf, ip);

        buf.push(0);
    }
```
`build_nbt` rewrites each server entry with only `name` and `ip`. Any existing per-server metadata in `servers.dat` (e.g., icons, visibility flags, resource pack settings, etc.) will be lost on the next launch as soon as ads are injected. Consider preserving the full existing compound entries and only prepending/upserting the ad entries, ideally by using an NBT library to round-trip all fields safely.
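A sketch of the prepend/upsert idea, using a hypothetical in-memory representation (`Entry`, `upsert_ads`, and all field names are illustrative; a real fix should round-trip full NBT compounds with an NBT crate so binary fields like icons survive):

```rust
use std::collections::BTreeMap;

// Hypothetical flat form of one servers.dat entry: ALL fields are kept,
// not just name/ip, so extra metadata survives injection.
type Entry = BTreeMap<String, String>;

// Prepend/upsert ad entries by ip, preserving every field of existing entries.
fn upsert_ads(existing: &[Entry], ads: &[(String, String)]) -> Vec<Entry> {
    let mut out: Vec<Entry> = Vec::new();
    for (name, ip) in ads {
        // Reuse an existing entry (keeping its extra fields) if the ip matches.
        let mut entry = existing
            .iter()
            .find(|e| e.get("ip") == Some(ip))
            .cloned()
            .unwrap_or_default();
        entry.insert("name".into(), name.clone());
        entry.insert("ip".into(), ip.clone());
        out.push(entry);
    }
    // Append the user's remaining entries untouched.
    for e in existing {
        if !ads.iter().any(|(_, ip)| e.get("ip") == Some(ip)) {
            out.push(e.clone());
        }
    }
    out
}

fn main() {
    let mut user = Entry::new();
    user.insert("name".into(), "My Server".into());
    user.insert("ip".into(), "play.example.org".into());
    user.insert("icon".into(), "base64...".into()); // metadata to preserve

    let ads = vec![("Ad Server".to_string(), "ads.example.net".to_string())];
    let merged = upsert_ads(&[user], &ads);

    assert_eq!(merged.len(), 2);
    assert_eq!(merged[0]["ip"], "ads.example.net"); // ad prepended
    assert_eq!(merged[1]["icon"], "base64..."); // user metadata preserved
    println!("ok");
}
```

The key property is that existing compounds are copied whole and only `name`/`ip` are overwritten, instead of rebuilding every entry from two strings.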