From e533a25585c1c1191c5623a2d3e18e51ce9bc768 Mon Sep 17 00:00:00 2001
From: Quantum Explorer
Date: Thu, 30 Apr 2026 04:13:32 +0800
Subject: [PATCH 1/2] refactor(swift-sdk,platform-wallet): rebuild DashPay/DPNS
 persistence + identity sync, drop TokenWallet, consolidate persistence trait
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Multi-package refactor covering schema, persistence pipeline, identity sync,
and storage explorer. Key threads:

## SwiftData schema (DashSchemaV1, dev stores rebuild)

- New models: PersistentDPNSName, PersistentDashpayProfile,
  PersistentDashpayContactRequest. All non-optional `identity` / `owner`
  relationships with `#Unique<...>` composite keys matching the corresponding
  DPNS / DashPay contract uniqueness rules.
- PersistentAccount: compound `#Unique` on (wallet, accountType,
  accountIndex, standardTag, registrationIndex, keyClass, userIdentityId,
  friendIdentityId). `accountExtendedPubKeyBytes` flipped to `Data?` with
  `@Attribute(.unique)`. `isWatchOnly` removed (runtime-only state).
- PersistentWallet: `isWatchOnly` removed.
- PersistentCoreAddress: `txos` cascade-delete (was nullify) so
  account/wallet teardown drops TXOs cleanly. Persister upserts by
  Base58Check string so pool refreshes are non-destructive.
- PersistentTxo / PersistentTransaction: `txidHex` reverses bytes for
  canonical block-explorer display (Bitcoin/Dash convention).

## Persistence trait unification

PlatformWalletPersistence now has a single `store(changeset)` write path.
Dropped legacy `store_account` / `store_wallet_metadata` /
`store_account_addresses` methods; their data rides on three new fields on
PlatformWalletChangeSet (`wallet_metadata`, `account_registrations`,
`account_address_pools`). FFI exposes new `on_persist_wallet_metadata` /
`on_persist_account_registrations` / `on_persist_account_address_pools`
callbacks dispatched from inside `FFIPersister::store`.
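The `txidHex` convention mentioned in the schema bullets amounts to reversing the stored little-endian txid bytes before hex-encoding. A minimal sketch (hypothetical helper name, not the actual SwiftData accessor):

```rust
/// Hypothetical mirror of the `txidHex` convention: txids are stored
/// little-endian on the wire, but block explorers (Bitcoin/Dash
/// convention) display them byte-reversed, as big-endian hex.
fn txid_display_hex(txid_bytes: &[u8; 32]) -> String {
    txid_bytes
        .iter()
        .rev() // reverse byte order for canonical display
        .map(|b| format!("{:02x}", b))
        .collect()
}

fn main() {
    let mut txid = [0u8; 32];
    txid[0] = 0xAB; // first stored byte ends up last in the display string
    let hex = txid_display_hex(&txid);
    assert_eq!(hex.len(), 64);
    assert!(hex.starts_with("00"));
    assert!(hex.ends_with("ab"));
}
```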
## TokenWallet drop, IdentitySyncManager consolidation

- Deleted `TokenWallet` outright. Watch list / balance cache moved to
  `IdentitySyncManager.state`. Group-query helpers (read-only network
  queries) became free functions taking `&Sdk`.
- Removed `PlatformWalletInfo.token_balances` / `token_watched` and the
  `watched` / `unwatched` channels on `TokenBalanceChangeSet`. The cache
  lives in IdentitySyncManager now; per-balance changeset writes still flow
  through `TokenBalanceChangeSet.balances` / `removed_balances`.
- New `IdentitySyncManager` (in `manager/identity_sync.rs`) mirrors
  `PlatformAddressSyncManager`'s shape: registry + periodic loop, batches up
  to 100 token ids per `IdentityTokenBalancesQuery`, sequential per identity.
  Generic over `P: PlatformWalletPersistence` for static dispatch on the hot
  path. Decoupled from `PlatformWallet` / `WalletManager` — caller drives
  the registry via `register_identity` / `update_watched_tokens` /
  `unregister_identity`.

## Manager layout

- `identity_sync.rs` and `platform_address_sync.rs` moved into
  `src/manager/`. Lib re-exports updated.
- `spawn_wallet_event_adapter` generic over
  `P: PlatformWalletPersistence + 'static` (was `Arc`). Caller passes the
  manager's own `Arc` for static dispatch.

## DashPay profile / contact requests

- IdentityEntryFFI extended with profile fields (display_name / bio /
  public_message / avatar_url / avatar_hash / avatar_fingerprint).
  `from_entry` populates; `free_identity_entry_ffi` releases the C-strings.
- New `on_persist_contacts_fn` callback + ContactRequestFFI /
  ContactRequestRemovalFFI types projecting ContactChangeSet.sent / incoming
  / established / removed_*. `established` projects as two rows per entry
  (one outgoing, one incoming) so the per-direction Swift unique key upserts
  cleanly.

## Account-row dedup

AccountChangeSetFFI carries the full typed AccountType tags (`type_tag`,
`standard_tag`, `registration_index`, `key_class`, `user_identity_id`,
`friend_identity_id`). Swift persister keys upsert on those fields instead
of the legacy `Debug`-formatted `account_type_name` string. Eliminates
duplicate "Standard { … }" rows that appeared next to clean "BIP44 Account
#0" rows when the load path and sync changeset both emitted the same account
with different name strings.

## TXO graph

- `markUtxoSpent` now populates `PersistentTxo.spendingTransaction`.
  AccountChangeSetFFI.utxos_spent shape changed from `OutPointFFI` to
  `SpentOutPointFFI { outpoint, spending_txid }` — the spending tx's txid
  rides through so Swift can resolve and link the parent transaction,
  populating "Spent By" in the storage explorer.
- `PersistentTxo.coreAddress` backfilled inside `persistAccountAddresses`:
  when an address row is upserted, any TXO at that Base58Check with a nil
  `coreAddress` link gets attached. Closes the race where SPV emits a UTXO
  before the address-pool row lands.

## Wallet recovery + identity discovery

`register_wallet` now calls `identity().sync()` after platform-address init.
For a recovery flow (existing mnemonic re-typed) this hydrates every
identity the wallet had on Platform without an explicit "Re-scan" step.
Failures are logged but never block wallet registration.
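The account-row dedup above hinges on hashing the typed tag fields rather than a formatted name string. A sketch of the idea (the `AccountKey` struct and field types here are hypothetical stand-ins for the Swift `#Unique` key, not code from this patch):

```rust
use std::collections::HashMap;

/// Hypothetical mirror of the composite upsert key the Swift persister
/// derives from AccountChangeSetFFI's typed tag fields. Two emits of
/// the same account compare equal even when their display names differ.
#[derive(Hash, PartialEq, Eq, Clone)]
struct AccountKey {
    type_tag: u8,
    standard_tag: u8,
    account_index: u32,
    registration_index: u32,
    key_class: u32,
    user_identity_id: [u8; 32],
    friend_identity_id: [u8; 32],
}

fn main() {
    let key = AccountKey {
        type_tag: 0,
        standard_tag: 0,
        account_index: 0,
        registration_index: 0,
        key_class: 0,
        user_identity_id: [0u8; 32],
        friend_identity_id: [0u8; 32],
    };
    let mut rows: HashMap<AccountKey, String> = HashMap::new();
    // Load path and sync path emit the same account under different
    // name strings; keying on the typed tags collapses them into one
    // row (the second insert overwrites the first).
    rows.insert(key.clone(), "Standard { index: 0, … }".to_string());
    rows.insert(key.clone(), "BIP44 Account #0".to_string());
    assert_eq!(rows.len(), 1);
    assert_eq!(rows[&key], "BIP44 Account #0");
}
```

Keying on a `Debug`-formatted string instead would make the row identity depend on formatting, which is exactly the duplicate-row bug described above.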
The "Search Wallets for Identities" UI was renamed to "Re-scan for
Identities" to reflect its new role as a refresh/retry rather than the only
discovery path.

## Per-wallet TransactionListView push

Was stalling on iOS 26 even on empty wallets because closure-based
NavigationLink re-runs the destination's `init` (and its `@Query`
registration) on every parent body invocation — hundreds of registrations
during sync. Switched both pushes in the wallets stack to value-based
`NavigationLink(value:)` + `.navigationDestination(for:)` on
`WalletsContentView`. Also moved the per-wallet TX query to query
`PersistentTransaction` directly with a relationship-traversal predicate
(`tx.outputs.contains { walletId == X } || tx.inputs.contains { ... }`),
sorted in SQLite — no Swift dedupe, no fault chain.

## Storage Explorer

- New list + detail views for the three new persistent models.
- Comprehensive audit + fill across every other detail view —
  PersistentToken (huge expansion incl. all 9 ChangeControlRules /
  distributions / localizations), PersistentDocument (block heights /
  payload sizes), PersistentDataContract (keywords / blob sizes), etc.
- PersistentPlatformAddressesSyncStateStorageDetailView fix: was missing the
  `walletId` scope key entirely.
- TXO detail: Address moved from Core to Relationships and becomes a
  NavigationLink to the address detail when `coreAddress` is linked. Removed
  redundant "Address Row".
- Core Addresses list view gets a system search bar (`.searchable`) over
  Base58Check / derivation path / address index.

## Core balance display (Option A: derive from TXOs)

- BalanceCardView and AccountListView read from a single shared `@Query` in
  `WalletDetailView` filtered by walletId denorm. Wallet-level
  confirmed/unconfirmed balance partitions the result; per-account balance
  further partitions by address-membership in the account's pool.
  Address-keyed index built once per render so per-account lookup is
  O(pool size), not O(walletTxos).
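The address-keyed balance derivation can be sketched as follows (a minimal model in Rust rather than the actual SwiftUI view code; the `Txo` struct and helper names are hypothetical):

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical stand-in for a persisted TXO row.
struct Txo {
    address: String,
    value: u64,
    confirmed: bool,
}

/// Per-account balance: probe the shared index once per pool address,
/// so the cost is O(pool size) rather than O(walletTxos).
fn account_balance(index: &HashMap<String, u64>, pool: &HashSet<&str>) -> u64 {
    pool.iter().filter_map(|a| index.get(*a)).sum()
}

fn main() {
    let txos = vec![
        Txo { address: "Xa".into(), value: 5, confirmed: true },
        Txo { address: "Xb".into(), value: 7, confirmed: true },
        Txo { address: "Xc".into(), value: 3, confirmed: false },
    ];
    // Build the address-keyed index once per render over the single
    // shared wallet-level TXO query (confirmed partition shown here).
    let mut index: HashMap<String, u64> = HashMap::new();
    for t in txos.iter().filter(|t| t.confirmed) {
        *index.entry(t.address.clone()).or_insert(0) += t.value;
    }
    // An account's pool is a small set of its own addresses.
    let pool: HashSet<&str> = HashSet::from(["Xa", "Xb"]);
    assert_eq!(account_balance(&index, &pool), 12);
}
```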
- One subscription instead of three across the view tree — cuts SwiftData
  change-tracking work per TXO insert by ~3×.
- Known issue: per-account row balances can read 0 in some states (deferred
  — see follow-up).

## Misc

- `core_bridge.rs` doc comment fixed (was referencing a non-existent
  `WalletEventAdapter` struct and miscrediting an upstream change).
- DPNS registration fix: `RegisterNameView` passes the user-typed display
  label to `wallet.registerDpnsName`, not the homograph-normalized form.
  Names registered before this fix have the normalized form locked in on
  Platform; they can't be retroactively repaired (DPNS labels aren't
  editable).

Co-Authored-By: Claude Opus 4.7 (1M context)
---
 .../src/contact_persistence.rs | 390 ++++++++
 .../src/core_address_types.rs | 14 +-
 .../src/core_wallet_types.rs | 181 +++-
 .../src/identity_persistence.rs | 486 +++++++++-
 .../src/identity_sync.rs | 537 +++++++++++
 packages/rs-platform-wallet-ffi/src/lib.rs | 6 +
 .../src/memory_explorer.rs | 20 +-
 .../rs-platform-wallet-ffi/src/persistence.rs | 639 +++++++++----
 .../src/token_persistence.rs | 22 +-
 .../src/tokens/group_queries.rs | 41 +-
 .../rs-platform-wallet-ffi/src/tokens/mod.rs | 2 -
 .../rs-platform-wallet-ffi/src/tokens/sync.rs | 97 --
 .../src/wallet_registration_persistence.rs | 85 ++
 .../src/wallet_restore_types.rs | 6 +-
 .../rs-platform-wallet-ffi/src/xpub_render.rs | 2 +-
 packages/rs-platform-wallet/Cargo.toml | 3 +
 .../src/changeset/changeset.rs | 169 +++-
 .../src/changeset/core_bridge.rs | 48 +-
 .../rs-platform-wallet/src/changeset/mod.rs | 10 +-
 .../src/changeset/traits.rs | 67 --
 packages/rs-platform-wallet/src/events.rs | 4 +-
 packages/rs-platform-wallet/src/lib.rs | 11 +-
 .../src/manager/accessors.rs | 23 +-
 .../src/manager/identity_sync.rs | 841 ++++++++++++++++++
 .../rs-platform-wallet/src/manager/load.rs | 4 +-
 .../rs-platform-wallet/src/manager/mod.rs | 34 +-
 .../{ => manager}/platform_address_sync.rs | 0
 .../src/manager/wallet_lifecycle.rs | 120 +--
 .../rs-platform-wallet/src/wallet/apply.rs | 93 +-
 .../src/wallet/identity/network/mod.rs | 3 +-
 packages/rs-platform-wallet/src/wallet/mod.rs | 1 -
 .../src/wallet/platform_wallet.rs | 29 +-
 .../src/wallet/platform_wallet_traits.rs | 4 -
 .../src/wallet/tokens/group_queries.rs | 295 +++---
 .../src/wallet/tokens/mod.rs | 20 +-
 .../src/wallet/tokens/wallet.rs | 290 ------
 .../Persistence/DashModelContainer.swift | 38 +
 .../Models/PersistentAccount.swift | 36 +-
 .../Models/PersistentCoreAddress.swift | 17 +-
 .../Models/PersistentDPNSName.swift | 150 ++++
 .../PersistentDashpayContactRequest.swift | 174 ++++
 .../Models/PersistentDashpayProfile.swift | 130 +++
 .../Models/PersistentIdentity.swift | 43 +
 .../Models/PersistentPlatformAddress.swift | 4 +-
 .../ManagedPlatformWallet.swift | 10 +-
 .../PlatformWalletManagerIdentitySync.swift | 320 +++++++
 .../PlatformWalletPersistenceHandler.swift | 808 +++++++++++++++--
 .../PlatformWallet/Tokens/TokenActions.swift | 80 +-
 .../Core/Views/AccountListView.swift | 74 +-
 .../Core/Views/IdentitiesContentView.swift | 2 +-
 .../Core/Views/ReceiveAddressView.swift | 2 +-
 .../Core/Views/WalletDetailView.swift | 28 +-
 .../Views/IdentityDetailView.swift | 104 ++-
 .../Views/RegisterNameView.swift | 29 +-
 .../SearchWalletsForIdentitiesView.swift | 4 +-
 .../Views/StorageExplorerView.swift | 24 +
 .../Views/StorageModelListViews.swift | 176 +++-
 .../Views/StorageRecordDetailViews.swift | 829 ++++++++++++++++-
 .../Views/WalletMemoryExplorerView.swift | 7 +-
 59 files changed, 6320 insertions(+), 1366 deletions(-)
 create mode 100644 packages/rs-platform-wallet-ffi/src/contact_persistence.rs
 create mode 100644 packages/rs-platform-wallet-ffi/src/identity_sync.rs
 delete mode 100644 packages/rs-platform-wallet-ffi/src/tokens/sync.rs
 create mode 100644 packages/rs-platform-wallet-ffi/src/wallet_registration_persistence.rs
 create mode 100644 packages/rs-platform-wallet/src/manager/identity_sync.rs
 rename packages/rs-platform-wallet/src/{ =>
manager}/platform_address_sync.rs (100%)
 delete mode 100644 packages/rs-platform-wallet/src/wallet/tokens/wallet.rs
 create mode 100644 packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDPNSName.swift
 create mode 100644 packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayContactRequest.swift
 create mode 100644 packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayProfile.swift
 create mode 100644 packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerIdentitySync.swift

diff --git a/packages/rs-platform-wallet-ffi/src/contact_persistence.rs b/packages/rs-platform-wallet-ffi/src/contact_persistence.rs
new file mode 100644
index 00000000000..f08d3b41c77
--- /dev/null
+++ b/packages/rs-platform-wallet-ffi/src/contact_persistence.rs
@@ -0,0 +1,390 @@
+//! FFI types + helpers for forwarding
+//! [`ContactChangeSet`](platform_wallet::changeset::ContactChangeSet)
+//! out of [`FFIPersister`](crate::persistence::FFIPersister) to Swift.
+//!
+//! `ContactChangeSet` is a top-level (not per-identity) changeset
+//! carrying sent / incoming / removed-sent / removed-incoming /
+//! established contact requests. The Swift mirror is one row per
+//! `(networkRaw, owner_id, contact_id, is_outgoing)` quad in
+//! `PersistentDashpayContactRequest` — outgoing and incoming rows for
+//! the same `(owner, contact)` pair coexist as distinct rows because
+//! the encrypted payload differs per direction.
+//!
+//! ## Wire shape
+//!
+//! - Upserts ride as a single flat array of [`ContactRequestFFI`]
+//!   regardless of which underlying field on `ContactChangeSet` they
+//!   came from. Each row carries an explicit `is_outgoing` bit so the
+//!   Swift handler can route it to the correct uniqueness bucket.
+//! - Removals split into two parallel arrays — sent vs incoming — to
+//!   match the two `BTreeSet<...Key>` fields on `ContactChangeSet`
+//!   and so the Swift handler can delete the right `is_outgoing` row
+//!   without ambiguity.
+//!
+//! ## `established` projection
+//!
+//! Each `EstablishedContact` carries both the outgoing and the
+//! incoming `ContactRequest` that built it, so the persister projects
+//! the established map as **two** [`ContactRequestFFI`] rows per entry
+//! (one with `is_outgoing == true`, one with `is_outgoing == false`).
+//! The unique constraint on the Swift side means these upsert cleanly
+//! over any prior sent / incoming row for the same `(owner, contact)`
+//! pair — establishment promotes the row in place rather than
+//! requiring an explicit tombstone.
+//!
+//! ## Ownership
+//!
+//! Each [`ContactRequestFFI`] owns its `encrypted_public_key`,
+//! `encrypted_account_label`, and `auto_accept_proof` byte buffers
+//! (heap-allocated via `Box::into_raw`). [`free_contact_requests_ffi`]
+//! releases every allocation across an array — the persister
+//! callsite calls it in a final loop after the Swift handler returns.
+
+use std::os::raw::c_void;
+use std::ptr;
+
+/// Flat C mirror of a single contact request entry — used for both
+/// pending (`sent_requests` / `incoming_requests`) and established
+/// (`established`) cases.
+///
+/// `owner_id` is the wallet-owned identity (the [`ManagedIdentity`]
+/// owner the request belongs to). `contact_id` is the other party.
+/// The direction bit `is_outgoing` distinguishes "owner sent this
+/// request to contact" from "contact sent this request to owner".
+///
+/// Per-direction key indices, account reference, and the encrypted
+/// payload are carried straight through from
+/// [`ContactRequest`](platform_wallet::ContactRequest).
+///
+/// [`ManagedIdentity`]: platform_wallet::ManagedIdentity
+#[repr(C)]
+pub struct ContactRequestFFI {
+    /// Owning identity (the wallet's identity). For sent / outgoing
+    /// rows this is the `sender_id`; for incoming rows this is the
+    /// `recipient_id`.
+    pub owner_id: [u8; 32],
+    /// The other party's identity. Mirror image of `owner_id`.
+    pub contact_id: [u8; 32],
+    /// Direction bit. `true` ⇒ owner sent to contact (the underlying
+    /// `ContactRequest` has `sender_id == owner_id`). `false` ⇒
+    /// contact sent to owner.
+    pub is_outgoing: bool,
+    /// `ContactRequest::sender_key_index` — index of the sender's
+    /// identity public key used for the ECDH that encrypted the
+    /// payload.
+    pub sender_key_index: u32,
+    /// `ContactRequest::recipient_key_index`.
+    pub recipient_key_index: u32,
+    /// `ContactRequest::account_reference`.
+    pub account_reference: u32,
+    /// Heap-allocated copy of `ContactRequest::encrypted_public_key`.
+    /// Released by [`free_contact_requests_ffi`].
+    pub encrypted_public_key: *const u8,
+    /// Length of [`Self::encrypted_public_key`] in bytes.
+    pub encrypted_public_key_len: usize,
+    /// Heap-allocated copy of `ContactRequest::encrypted_account_label`,
+    /// or `null` when the source `Option` was `None`. Released by
+    /// [`free_contact_requests_ffi`].
+    pub encrypted_account_label: *const u8,
+    /// Length of [`Self::encrypted_account_label`] in bytes; `0`
+    /// when the pointer is null.
+    pub encrypted_account_label_len: usize,
+    /// Heap-allocated copy of `ContactRequest::auto_accept_proof`,
+    /// or `null` when the source `Option` was `None`. Released by
+    /// [`free_contact_requests_ffi`].
+    pub auto_accept_proof: *const u8,
+    /// Length of [`Self::auto_accept_proof`] in bytes; `0` when the
+    /// pointer is null.
+    pub auto_accept_proof_len: usize,
+    /// `ContactRequest::core_height_created_at` — the Core block
+    /// height when the request landed on Platform.
+    pub core_height_created_at: u32,
+    /// `ContactRequest::created_at` — Unix-millis timestamp.
+    pub created_at: u64,
+}
+
+/// Composite identifier for [`ContactChangeSet::removed_sent`] and
+/// [`ContactChangeSet::removed_incoming`] entries on the FFI boundary.
+///
+/// A flat `[u8; 32]` pair so Swift can iterate an array directly
+/// without a secondary indirection. `owner_id` is always the
+/// wallet-owned identity (per the changeset's keyed-by-owner
+/// invariant); `contact_id` is the other party (recipient for sent,
+/// sender for incoming).
+///
+/// [`ContactChangeSet::removed_sent`]: platform_wallet::changeset::ContactChangeSet::removed_sent
+/// [`ContactChangeSet::removed_incoming`]: platform_wallet::changeset::ContactChangeSet::removed_incoming
+#[repr(C)]
+#[derive(Debug, Clone, Copy)]
+pub struct ContactRequestRemovalFFI {
+    pub owner_id: [u8; 32],
+    pub contact_id: [u8; 32],
+}
+
+// Compile-time guards. Pin the expected layouts so any reshape on
+// the Rust side fails the cargo build before it can ship a dylib
+// the Swift side will mis-parse at runtime.
+//
+// Expected `ContactRequestFFI` layout on 64-bit targets:
+//
+//   0..=31    owner_id                    [u8; 32]
+//   32..=63   contact_id                  [u8; 32]
+//   64        is_outgoing                 bool
+//   65..=67   (padding to 4)
+//   68..=71   sender_key_index            u32
+//   72..=75   recipient_key_index         u32
+//   76..=79   account_reference           u32
+//   80..=87   encrypted_public_key        *const u8
+//   88..=95   encrypted_public_key_len    usize
+//   96..=103  encrypted_account_label     *const u8
+//   104..=111 encrypted_account_label_len usize
+//   112..=119 auto_accept_proof           *const u8
+//   120..=127 auto_accept_proof_len       usize
+//   128..=131 core_height_created_at      u32
+//   132..=135 (padding to 8)
+//   136..=143 created_at                  u64
+//
+// Total size = 144, alignment = 8 (from u64 / pointer fields).
+const _: [u8; 144] = [0u8; std::mem::size_of::<ContactRequestFFI>()];
+const _: [u8; 8] = [0u8; std::mem::align_of::<ContactRequestFFI>()];
+
+// Expected `ContactRequestRemovalFFI` layout: 64 bytes, alignment 1.
+const _: [u8; 64] = [0u8; std::mem::size_of::<ContactRequestRemovalFFI>()];
+const _: [u8; 1] = [0u8; std::mem::align_of::<ContactRequestRemovalFFI>()];
+
+// ---------------------------------------------------------------------------
+// Conversions
+// ---------------------------------------------------------------------------
+
+impl ContactRequestFFI {
+    /// Build a `ContactRequestFFI` from a [`ContactRequest`] for the
+    /// outgoing direction (owner sent the request to contact). The
+    /// `owner_id` and `is_outgoing == true` are stamped from the
+    /// caller; the rest of the fields come straight from the request.
+    ///
+    /// Heap-allocates the three byte payloads (`encrypted_public_key`
+    /// always, the optional `encrypted_account_label` and
+    /// `auto_accept_proof` when `Some(_)`). Released by
+    /// [`free_contact_requests_ffi`].
+    ///
+    /// [`ContactRequest`]: platform_wallet::ContactRequest
+    pub fn from_outgoing(
+        owner_id: [u8; 32],
+        contact_id: [u8; 32],
+        request: &platform_wallet::ContactRequest,
+    ) -> Self {
+        Self::from_parts(owner_id, contact_id, true, request)
+    }
+
+    /// Sibling of [`Self::from_outgoing`] for the incoming direction
+    /// (contact sent the request to owner). `is_outgoing == false`.
+    pub fn from_incoming(
+        owner_id: [u8; 32],
+        contact_id: [u8; 32],
+        request: &platform_wallet::ContactRequest,
+    ) -> Self {
+        Self::from_parts(owner_id, contact_id, false, request)
+    }
+
+    fn from_parts(
+        owner_id: [u8; 32],
+        contact_id: [u8; 32],
+        is_outgoing: bool,
+        request: &platform_wallet::ContactRequest,
+    ) -> Self {
+        let (encrypted_public_key, encrypted_public_key_len) =
+            allocate_byte_buffer(&request.encrypted_public_key);
+        let (encrypted_account_label, encrypted_account_label_len) =
+            match request.encrypted_account_label.as_deref() {
+                Some(bytes) => allocate_byte_buffer(bytes),
+                None => (ptr::null(), 0),
+            };
+        let (auto_accept_proof, auto_accept_proof_len) = match request.auto_accept_proof.as_deref()
+        {
+            Some(bytes) => allocate_byte_buffer(bytes),
+            None => (ptr::null(), 0),
+        };
+        Self {
+            owner_id,
+            contact_id,
+            is_outgoing,
+            sender_key_index: request.sender_key_index,
+            recipient_key_index: request.recipient_key_index,
+            account_reference: request.account_reference,
+            encrypted_public_key,
+            encrypted_public_key_len,
+            encrypted_account_label,
+            encrypted_account_label_len,
+            auto_accept_proof,
+            auto_accept_proof_len,
+            core_height_created_at: request.core_height_created_at,
+            created_at: request.created_at,
+        }
+    }
+}
+
+/// Heap-allocate a `Box<[u8]>` from `bytes` and return a `(ptr, len)`
+/// pair owned by the caller. Empty slices return `(null, 0)` so the
+/// receiver can avoid an empty allocation walk; the matching free
+/// helper checks the length before reclaiming.
+fn allocate_byte_buffer(bytes: &[u8]) -> (*const u8, usize) {
+    if bytes.is_empty() {
+        return (ptr::null(), 0);
+    }
+    let boxed: Box<[u8]> = bytes.to_vec().into_boxed_slice();
+    let len = boxed.len();
+    (Box::into_raw(boxed) as *const u8, len)
+}
+
+// ---------------------------------------------------------------------------
+// Destructors
+// ---------------------------------------------------------------------------
+
+/// Release every heap allocation owned by an array of
+/// [`ContactRequestFFI`] rows produced by [`ContactRequestFFI::from_outgoing`]
+/// / [`ContactRequestFFI::from_incoming`].
+///
+/// Idempotent on a per-row basis: each pointer is checked for null
+/// before reclaim and nulled afterwards.
+///
+/// # Safety
+///
+/// `entries` must point to `count` contiguous [`ContactRequestFFI`]
+/// values produced by this module's allocators and not previously
+/// freed. Mixing in pointers Swift owns (or pointers from a different
+/// allocator) will corrupt the heap.
+pub unsafe fn free_contact_requests_ffi(entries: *mut ContactRequestFFI, count: usize) {
+    if entries.is_null() || count == 0 {
+        return;
+    }
+    let slice = unsafe { std::slice::from_raw_parts_mut(entries, count) };
+    for entry in slice.iter_mut() {
+        free_byte_buffer(
+            &mut entry.encrypted_public_key,
+            &mut entry.encrypted_public_key_len,
+        );
+        free_byte_buffer(
+            &mut entry.encrypted_account_label,
+            &mut entry.encrypted_account_label_len,
+        );
+        free_byte_buffer(
+            &mut entry.auto_accept_proof,
+            &mut entry.auto_accept_proof_len,
+        );
+    }
+}
+
+/// Reclaim a `Box<[u8]>` previously published via
+/// [`allocate_byte_buffer`]. Idempotent on null / zero-length slots.
+fn free_byte_buffer(slot: &mut *const u8, len_slot: &mut usize) {
+    if !slot.is_null() && *len_slot > 0 {
+        let slice = unsafe { std::slice::from_raw_parts_mut(*slot as *mut u8, *len_slot) };
+        let _ = unsafe { Box::from_raw(slice as *mut [u8]) };
+    }
+    *slot = ptr::null();
+    *len_slot = 0;
+}
+
+// ---------------------------------------------------------------------------
+// Callback signature
+// ---------------------------------------------------------------------------
+
+/// C-ABI function pointer type for the contact persistence callback.
+/// Defined as a typedef so [`crate::persistence::PersistenceCallbacks`]
+/// stays terse.
+///
+/// Parameters:
+/// - `ctx`: opaque context pointer set by the FFI consumer.
+/// - `wallet_id`: 32-byte wallet identifier scoping this changeset
+///   (matches the parameter on every other per-kind callback). Used
+///   by the Swift side to resolve the network for the contact rows.
+/// - `upserts` / `upserts_count`: rows to insert-or-refresh, with the
+///   per-row `is_outgoing` bit determining which direction the row
+///   covers. Pointer is valid only for the duration of the callback.
+/// - `removed_sent` / `removed_sent_count`: tombstones for outgoing
+///   rows (sent requests explicitly removed by the owner).
+/// - `removed_incoming` / `removed_incoming_count`: tombstones for
+///   incoming rows.
+///
+/// Return code: `0` on success, non-zero to flag the round as failed
+/// for the bracketing changeset begin/end transaction.
+pub type OnPersistContactsFn = unsafe extern "C" fn(
+    ctx: *mut c_void,
+    wallet_id: *const u8,
+    upserts: *const ContactRequestFFI,
+    upserts_count: usize,
+    removed_sent: *const ContactRequestRemovalFFI,
+    removed_sent_count: usize,
+    removed_incoming: *const ContactRequestRemovalFFI,
+    removed_incoming_count: usize,
+) -> i32;
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use platform_wallet::ContactRequest;
+
+    fn sample_request() -> ContactRequest {
+        ContactRequest {
+            sender_id: dpp::prelude::Identifier::from([1u8; 32]),
+            recipient_id: dpp::prelude::Identifier::from([2u8; 32]),
+            sender_key_index: 7,
+            recipient_key_index: 9,
+            account_reference: 11,
+            encrypted_account_label: Some(vec![0xAA, 0xBB, 0xCC]),
+            encrypted_public_key: vec![0x01; 96],
+            auto_accept_proof: Some(vec![0xDE, 0xAD, 0xBE, 0xEF]),
+            core_height_created_at: 100_000,
+            created_at: 1_700_000_000_000,
+        }
+    }
+
+    #[test]
+    fn test_from_outgoing_round_trip() {
+        let request = sample_request();
+        let owner = [3u8; 32];
+        let contact = [4u8; 32];
+        let mut ffi = ContactRequestFFI::from_outgoing(owner, contact, &request);
+        assert_eq!(ffi.owner_id, owner);
+        assert_eq!(ffi.contact_id, contact);
+        assert!(ffi.is_outgoing);
+        assert_eq!(ffi.sender_key_index, 7);
+        assert_eq!(ffi.recipient_key_index, 9);
+        assert_eq!(ffi.account_reference, 11);
+        assert_eq!(ffi.encrypted_public_key_len, 96);
+        let pk = unsafe {
+            std::slice::from_raw_parts(ffi.encrypted_public_key, ffi.encrypted_public_key_len)
+        };
+        assert_eq!(pk, &[0x01; 96]);
+        assert_eq!(ffi.encrypted_account_label_len, 3);
+        let label = unsafe {
+            std::slice::from_raw_parts(ffi.encrypted_account_label, ffi.encrypted_account_label_len)
+        };
+        assert_eq!(label, &[0xAA, 0xBB, 0xCC]);
+        assert_eq!(ffi.auto_accept_proof_len, 4);
+        assert_eq!(ffi.core_height_created_at, 100_000);
+        assert_eq!(ffi.created_at, 1_700_000_000_000);
+
+        unsafe { free_contact_requests_ffi(&mut ffi as *mut ContactRequestFFI, 1) };
+        assert!(ffi.encrypted_public_key.is_null());
+        assert_eq!(ffi.encrypted_public_key_len, 0);
+        assert!(ffi.encrypted_account_label.is_null());
+        assert!(ffi.auto_accept_proof.is_null());
+        // Idempotent — second call must not double-free.
+        unsafe { free_contact_requests_ffi(&mut ffi as *mut ContactRequestFFI, 1) };
+    }
+
+    #[test]
+    fn test_from_incoming_no_optional_payloads() {
+        let mut request = sample_request();
+        request.encrypted_account_label = None;
+        request.auto_accept_proof = None;
+        let mut ffi = ContactRequestFFI::from_incoming([5u8; 32], [6u8; 32], &request);
+        assert!(!ffi.is_outgoing);
+        assert!(ffi.encrypted_account_label.is_null());
+        assert_eq!(ffi.encrypted_account_label_len, 0);
+        assert!(ffi.auto_accept_proof.is_null());
+        assert_eq!(ffi.auto_accept_proof_len, 0);
+        unsafe { free_contact_requests_ffi(&mut ffi as *mut ContactRequestFFI, 1) };
+    }
+}
diff --git a/packages/rs-platform-wallet-ffi/src/core_address_types.rs b/packages/rs-platform-wallet-ffi/src/core_address_types.rs
index 402fecfddd2..48eb5903279 100644
--- a/packages/rs-platform-wallet-ffi/src/core_address_types.rs
+++ b/packages/rs-platform-wallet-ffi/src/core_address_types.rs
@@ -1,11 +1,13 @@
 //! C-compatible types for Core (on-chain) address pool persistence.
 //!
-//! `on_persist_account_addresses_fn` fires when a wallet's on-chain
-//! address pool changes — initial population on wallet create, pool
-//! extension after `next_unused`, and per-address `used` flips when
-//! SPV sees activity. Swift persists each entry into SwiftData
-//! (`PersistentCoreAddress`) so the Storage Explorer can render
-//! derivation paths + pubkeys reactively via `@Query`.
+//! `on_persist_account_address_pools_fn` fires when a wallet's
+//! on-chain address pool changes — initial population on wallet
+//! create, pool extension after `next_unused`, and per-address
+//! `used` flips when SPV sees activity. Each
+//! `AccountAddressPoolFFI` entry in the round carries a slice of
+//! these per-address rows. Swift persists each entry into
+//! SwiftData (`PersistentCoreAddress`) so the Storage Explorer
+//! can render derivation paths + pubkeys reactively via `@Query`.
 
 use std::os::raw::c_char;
 
diff --git a/packages/rs-platform-wallet-ffi/src/core_wallet_types.rs b/packages/rs-platform-wallet-ffi/src/core_wallet_types.rs
index c5f974d924e..e9115dfa644 100644
--- a/packages/rs-platform-wallet-ffi/src/core_wallet_types.rs
+++ b/packages/rs-platform-wallet-ffi/src/core_wallet_types.rs
@@ -14,6 +14,19 @@ pub struct OutPointFFI {
     pub vout: u32,
 }
 
+/// Outpoint of a TXO that was spent, paired with the spending
+/// transaction's txid. Replaces the bare `OutPointFFI` on
+/// `AccountChangeSetFFI.utxos_spent` so the Swift persister can
+/// populate `PersistentTxo.spendingTransaction` (the column that
+/// drives "Spent By" in the storage explorer and any per-tx
+/// drill-down from the spent side of the chain).
+#[repr(C)]
+#[derive(Debug, Clone, Copy)]
+pub struct SpentOutPointFFI {
+    pub outpoint: OutPointFFI,
+    pub spending_txid: [u8; 32],
+}
+
 // ---------------------------------------------------------------------------
 // Chain state
 // ---------------------------------------------------------------------------
@@ -86,15 +99,44 @@ pub struct TransactionRecordFFI {
 
 #[repr(C)]
 pub struct AccountChangeSetFFI {
-    /// Account type name (Debug format of AccountType).
+    /// Account type name. Currently emitted as the `Debug` form of
+    /// `AccountType` (e.g. `"Standard { index: 0,
+    /// standard_account_type: BIP44Account }"`); kept for one extra
+    /// release so any caller still string-matching against it
+    /// doesn't break, but **not** used for upsert identity any more
+    /// — Swift derives the display name from the typed tag fields
+    /// below via the same helper the load path uses, so a single
+    /// canonical name appears in the SwiftData row regardless of
+    /// which path emitted it.
     pub account_type_name: *mut c_char,
     /// Account index (for indexed types, 0 otherwise).
     pub account_index: u32,
+    /// `AccountType` discriminant. Stable across releases — the
+    /// Swift persister keys upsert on `(wallet_id, type_tag,
+    /// account_index, ...)` rather than on the legacy `Debug`
+    /// `account_type_name` string, so a load-path emit and a
+    /// changeset-path emit for the same account collapse onto a
+    /// single SwiftData row.
+    pub type_tag: crate::wallet_restore_types::AccountTypeTagFFI,
+    /// Sub-discriminant for `type_tag == Standard`. Splits BIP44
+    /// (0) from BIP32 (1). `Bip44` for non-Standard variants
+    /// (ignored by Swift in that case).
+    pub standard_tag: crate::wallet_restore_types::StandardAccountTypeTagFFI,
+    /// `IdentityTopUp.registration_index`. `0` for other variants.
+    pub registration_index: u32,
+    /// `PlatformPayment.key_class`. `0` for other variants.
+    pub key_class: u32,
+    /// `Dashpay*.user_identity_id` (32 bytes). Zeroed for non-
+    /// Dashpay variants.
+    pub user_identity_id: [u8; 32],
+    /// `Dashpay*.friend_identity_id` (32 bytes). Zeroed for non-
+    /// Dashpay variants.
+    pub friend_identity_id: [u8; 32],
     /// UTXOs added.
     pub utxos_added: *mut UtxoEntryFFI,
     pub utxos_added_count: usize,
     /// Outpoints of UTXOs spent.
-    pub utxos_spent: *mut OutPointFFI,
+    pub utxos_spent: *mut SpentOutPointFFI,
     pub utxos_spent_count: usize,
     /// Outpoints that became InstantSend-locked.
     pub utxos_instant_locked: *mut OutPointFFI,
@@ -233,7 +275,7 @@ impl WalletChangeSetFFI {
         // output_details; we walk them once per record to project
         // the UTXOs the persister should add or remove.
let mut utxos_added: Vec<UtxoEntryFFI> = Vec::new(); - let mut utxos_spent: Vec<OutPointFFI> = Vec::new(); + let mut utxos_spent: Vec<SpentOutPointFFI> = Vec::new(); for rec in &recs { utxos_added.extend(record_new_utxos_ffi(rec)); utxos_spent.extend(record_spent_outpoints_ffi(rec)); } @@ -247,9 +289,23 @@ impl WalletChangeSetFFI { let utxos_spent_count = utxos_spent.len(); let transactions_count = transactions.len(); + // Project the typed `AccountType` into the same flat tag + // layout the load path's `AccountSpecFFI` already uses. + // The Swift persister upserts on these typed fields + // rather than on the legacy `Debug`-formatted + // `account_type_name` string, so a load-path emit and a + // sync-path emit for the same account collapse onto a + // single SwiftData row. + let tags = account_type_to_tags(&account_type); ffi_accounts.push(AccountChangeSetFFI { account_type_name: type_name.into_raw(), account_index, + type_tag: tags.type_tag, + standard_tag: tags.standard_tag, + registration_index: tags.registration_index, + key_class: tags.key_class, + user_identity_id: tags.user_identity_id, + friend_identity_id: tags.friend_identity_id, utxos_added: vec_to_ptr(utxos_added), utxos_added_count, utxos_spent: vec_to_ptr(utxos_spent), @@ -305,6 +361,108 @@ fn account_index_of(at: &key_wallet::account::AccountType) -> u32 { } } +/// Subset of [`crate::wallet_restore_types::AccountSpecFFI`] carrying +/// only the tag/discriminator fields — no xpub. Used by the +/// changeset emit path to populate +/// [`AccountChangeSetFFI`]'s typed tags so the Swift persister can +/// upsert on the same composite key the load path uses. +struct AccountChangeSetTags { + type_tag: crate::wallet_restore_types::AccountTypeTagFFI, + standard_tag: crate::wallet_restore_types::StandardAccountTypeTagFFI, + registration_index: u32, + key_class: u32, + user_identity_id: [u8; 32], + friend_identity_id: [u8; 32], +} + +/// Project an upstream [`AccountType`] into the flat FFI tag layout. 
+/// +/// Mirrors [`build_account_spec_ffi`](crate::persistence::build_account_spec_ffi)'s +/// match arms but emits only the tag/discriminator fields — the +/// xpub is load-path-only and not relevant on the changeset emit +/// path. +fn account_type_to_tags(at: &key_wallet::account::AccountType) -> AccountChangeSetTags { + use crate::wallet_restore_types::{AccountTypeTagFFI, StandardAccountTypeTagFFI}; + use key_wallet::account::{AccountType, StandardAccountType}; + let mut tags = AccountChangeSetTags { + type_tag: AccountTypeTagFFI::Standard, + standard_tag: StandardAccountTypeTagFFI::Bip44, + registration_index: 0, + key_class: 0, + user_identity_id: [0u8; 32], + friend_identity_id: [0u8; 32], + }; + match at { + AccountType::Standard { + standard_account_type, + .. + } => { + tags.type_tag = AccountTypeTagFFI::Standard; + tags.standard_tag = match standard_account_type { + StandardAccountType::BIP44Account => StandardAccountTypeTagFFI::Bip44, + StandardAccountType::BIP32Account => StandardAccountTypeTagFFI::Bip32, + }; + } + AccountType::CoinJoin { .. 
} => { + tags.type_tag = AccountTypeTagFFI::CoinJoin; + } + AccountType::IdentityRegistration => { + tags.type_tag = AccountTypeTagFFI::IdentityRegistration; + } + AccountType::IdentityTopUp { registration_index } => { + tags.type_tag = AccountTypeTagFFI::IdentityTopUp; + tags.registration_index = *registration_index; + } + AccountType::IdentityTopUpNotBoundToIdentity => { + tags.type_tag = AccountTypeTagFFI::IdentityTopUpNotBoundToIdentity; + } + AccountType::IdentityInvitation => { + tags.type_tag = AccountTypeTagFFI::IdentityInvitation; + } + AccountType::AssetLockAddressTopUp => { + tags.type_tag = AccountTypeTagFFI::AssetLockAddressTopUp; + } + AccountType::AssetLockShieldedAddressTopUp => { + tags.type_tag = AccountTypeTagFFI::AssetLockShieldedAddressTopUp; + } + AccountType::ProviderVotingKeys => { + tags.type_tag = AccountTypeTagFFI::ProviderVotingKeys; + } + AccountType::ProviderOwnerKeys => { + tags.type_tag = AccountTypeTagFFI::ProviderOwnerKeys; + } + AccountType::ProviderOperatorKeys => { + tags.type_tag = AccountTypeTagFFI::ProviderOperatorKeys; + } + AccountType::ProviderPlatformKeys => { + tags.type_tag = AccountTypeTagFFI::ProviderPlatformKeys; + } + AccountType::DashpayReceivingFunds { + user_identity_id, + friend_identity_id, + .. + } => { + tags.type_tag = AccountTypeTagFFI::DashpayReceivingFunds; + tags.user_identity_id = *user_identity_id; + tags.friend_identity_id = *friend_identity_id; + } + AccountType::DashpayExternalAccount { + user_identity_id, + friend_identity_id, + .. + } => { + tags.type_tag = AccountTypeTagFFI::DashpayExternalAccount; + tags.user_identity_id = *user_identity_id; + tags.friend_identity_id = *friend_identity_id; + } + AccountType::PlatformPayment { key_class, .. } => { + tags.type_tag = AccountTypeTagFFI::PlatformPayment; + tags.key_class = *key_class; + } + } + tags +} + /// Project the "ours" outputs of a `TransactionRecord` into FFI UTXO /// entries. 
Mirrors `derive_new_utxos` in /// `platform_wallet::changeset::core_bridge` but stops one layer @@ -362,19 +520,26 @@ fn record_new_utxos_ffi( } /// Project the outpoints spent by a `TransactionRecord` (i.e. the -/// outpoints whose UTXO rows the persister should delete). +/// outpoints whose UTXO rows the persister should mark spent), +/// paired with the spending transaction's txid so the Swift +/// persister can populate `PersistentTxo.spendingTransaction`. fn record_spent_outpoints_ffi( rec: &key_wallet::managed_account::transaction_record::TransactionRecord, -) -> Vec<OutPointFFI> { +) -> Vec<SpentOutPointFFI> { + let mut spending_txid = [0u8; 32]; + spending_txid.copy_from_slice(rec.txid.as_ref()); rec.input_details .iter() .filter_map(|d| { let input = rec.transaction.input.get(d.index as usize)?; let mut txid = [0u8; 32]; txid.copy_from_slice(input.previous_output.txid.as_ref()); - Some(OutPointFFI { - txid, - vout: input.previous_output.vout, + Some(SpentOutPointFFI { + outpoint: OutPointFFI { + txid, + vout: input.previous_output.vout, + }, + spending_txid, }) }) .collect() diff --git a/packages/rs-platform-wallet-ffi/src/identity_persistence.rs b/packages/rs-platform-wallet-ffi/src/identity_persistence.rs index 87efa67e1bf..a4296824b28 100644 --- a/packages/rs-platform-wallet-ffi/src/identity_persistence.rs +++ b/packages/rs-platform-wallet-ffi/src/identity_persistence.rs @@ -18,6 +18,8 @@ //! helper for every entry before returning — Swift must consume //! whatever it needs to persist before returning from the callback. +use std::ffi::CString; +use std::os::raw::c_char; use std::ptr; use platform_wallet::changeset::{IdentityEntry, IdentityKeyEntry}; @@ -25,21 +27,34 @@ use platform_wallet::changeset::{IdentityEntry, IdentityKeyEntry}; // `IdentityStatus` discriminants are mirrored on the Swift side. Keep // this encoding in sync with the `repr(u8)` order in // `platform-wallet/src/wallet/identity/types/key_storage.rs`. 
-use platform_wallet::IdentityStatus; +use platform_wallet::{DashPayProfile, IdentityStatus}; /// Flat C mirror of [`IdentityEntry`]'s persistable scalars. /// /// Public keys are NOT included here — they travel in /// [`IdentityKeyEntryFFI`] alongside their derivation breadcrumb via /// a separate callback. Fields that don't map onto the Swift schema -/// (block times, DPNS names, DashPay profile/payments) are skipped; -/// DashPay overlays already ride on the dedicated -/// `dashpay_profiles` / `dashpay_payments_overlay` surfaces on the -/// parent changeset. +/// (block times, contested DPNS names, DashPay payments) are skipped; +/// DashPay payment overlays already ride on the dedicated +/// `dashpay_payments_overlay` surface on the parent changeset. /// /// User-visible label is no longer carried — `ManagedIdentity` doesn't /// have one, and Swift owns the `PersistentIdentity.alias` column /// directly. Removed entirely so the FFI layout stays minimal. +/// +/// Settled DPNS labels DO ride on this struct (heap-allocated, freed +/// in [`free_identity_entry_ffi`]) so the Swift persister can +/// upsert/cascade them onto a `PersistentDPNSName` row collection +/// owned by the parent `PersistentIdentity`. Contested labels are +/// deliberately omitted — their lifecycle is in-flight contest churn, +/// not the settled-label collection this struct mirrors. +/// +/// DashPay profile (`dashpay_profile_*`) rides on every upsert when +/// the underlying [`IdentityEntry::dashpay_profile`] is `Some(_)`. The +/// `_present` flag plus the per-string nullable pointers let Swift +/// distinguish "no profile yet" (skip the row) from "profile present +/// with this field unset" (clear the column). All heap-allocated +/// strings are freed in [`free_identity_entry_ffi`]. #[repr(C)] pub struct IdentityEntryFFI { pub identity_id: [u8; 32], @@ -59,6 +74,73 @@ pub struct IdentityEntryFFI { /// link `PersistentIdentity.walletId` back to `PersistentWallet`. 
pub wallet_id_is_some: bool, pub wallet_id: [u8; 32], + /// Heap-allocated array of NUL-terminated UTF-8 C strings, one + /// per confirmed DPNS label on the underlying + /// [`IdentityEntry::dpns_names`]. Owned by this FFI struct; freed + /// in [`free_identity_entry_ffi`]. `null` when `dpns_names_count` + /// is 0. + /// + /// Inner pointers may individually be null when the source label + /// contained an interior NUL byte (unreachable in practice — DPNS + /// validation rejects them). Consumers must skip null inner + /// pointers. + pub dpns_names: *const *const c_char, + /// Number of entries pointed at by [`Self::dpns_names`] / + /// [`Self::dpns_names_acquired_at`]. The two arrays are always the + /// same length. + pub dpns_names_count: usize, + /// Parallel `u64` array of `acquired_at` Unix-millis timestamps; + /// `0` when the source `DpnsNameInfo.acquired_at` was `None`. + /// Same length as [`Self::dpns_names`]. Heap-allocated, freed in + /// [`free_identity_entry_ffi`]. `null` when count is 0. + pub dpns_names_acquired_at: *const u64, + /// `true` iff the underlying [`IdentityEntry::dashpay_profile`] + /// is `Some(_)`. When `false`, all `dashpay_profile_*` pointer + /// fields are null and the byte-array fields are zeroed — Swift + /// must skip the profile upsert entirely (changeset semantics: + /// `dashpay_profile: None` means "no update" rather than + /// "delete", matching the merge policy on the Rust side). + pub dashpay_profile_present: bool, + /// Heap-allocated NUL-terminated UTF-8 C string for the DashPay + /// profile's display name. `null` when the source field was + /// `None`. Owned by this FFI struct; freed in + /// [`free_identity_entry_ffi`]. Ignore unless + /// [`Self::dashpay_profile_present`] is `true`. + pub dashpay_profile_display_name: *const c_char, + /// Heap-allocated NUL-terminated UTF-8 C string for the DashPay + /// profile's bio. `null` when the source field was `None`. 
Owned + /// by this FFI struct; freed in [`free_identity_entry_ffi`]. + /// Ignore unless [`Self::dashpay_profile_present`] is `true`. + pub dashpay_profile_bio: *const c_char, + /// Heap-allocated NUL-terminated UTF-8 C string for the DashPay + /// profile's avatar URL. `null` when the source field was + /// `None`. Owned by this FFI struct; freed in + /// [`free_identity_entry_ffi`]. Ignore unless + /// [`Self::dashpay_profile_present`] is `true`. + pub dashpay_profile_avatar_url: *const c_char, + /// SHA-256 hash of the avatar image bytes (DIP-15 `avatarHash`). + /// Zeroed when the source `Option<[u8; 32]>` was `None` — gate + /// reads on [`Self::dashpay_profile_avatar_hash_present`] rather + /// than checking for an all-zero hash, since `[0u8; 32]` is a + /// valid (if cosmically unlikely) hash value. + pub dashpay_profile_avatar_hash: [u8; 32], + /// `true` iff the source `avatar_hash` was `Some(_)`. Disambiguates + /// "no hash" from "hash that happens to be all zeros". + pub dashpay_profile_avatar_hash_present: bool, + /// DHash perceptual fingerprint of the avatar image (DIP-15 + /// `avatarFingerprint`, 8 bytes / 64 bits). Zeroed when the source + /// `Option<[u8; 8]>` was `None` — gate reads on + /// [`Self::dashpay_profile_avatar_fingerprint_present`] rather + /// than checking for an all-zero fingerprint. + pub dashpay_profile_avatar_fingerprint: [u8; 8], + /// `true` iff the source `avatar_fingerprint` was `Some(_)`. + pub dashpay_profile_avatar_fingerprint_present: bool, + /// Heap-allocated NUL-terminated UTF-8 C string for the DashPay + /// profile's public message. `null` when the source field was + /// `None`. Owned by this FFI struct; freed in + /// [`free_identity_entry_ffi`]. Ignore unless + /// [`Self::dashpay_profile_present`] is `true`. + pub dashpay_profile_public_message: *const c_char, } /// Flat C mirror of [`IdentityKeyEntry`] for forwarding across FFI. 
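The `avatar_hash` / `avatar_fingerprint` fields above use a present-flag encoding because C structs have no `Option` and an all-zero array is itself a legal value. A minimal sketch of that pattern, with hypothetical names (`OptionalHash32Ffi`, `flatten_hash`, `unflatten_hash` are illustrative, not the crate's actual API):

```rust
/// Hypothetical flattening of an `Option<[u8; 32]>` for a C ABI:
/// the payload is zeroed when absent, and a separate bool carries
/// the Some/None distinction. Readers gate on `present`, never on
/// an all-zero check, since `[0u8; 32]` is a valid hash value.
#[repr(C)]
pub struct OptionalHash32Ffi {
    pub bytes: [u8; 32],
    pub present: bool,
}

pub fn flatten_hash(src: Option<[u8; 32]>) -> OptionalHash32Ffi {
    match src {
        Some(h) => OptionalHash32Ffi { bytes: h, present: true },
        None => OptionalHash32Ffi { bytes: [0u8; 32], present: false },
    }
}

/// Read side: recover the `Option` from the flag, not the payload.
pub fn unflatten_hash(ffi: &OptionalHash32Ffi) -> Option<[u8; 32]> {
    ffi.present.then_some(ffi.bytes)
}
```

Note the round trip is lossless even for the all-zero hash, which a sentinel-value encoding would conflate with "absent".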
@@ -151,6 +233,45 @@ pub struct IdentityKeyRemovalFFI { const _: [u8; 136] = [0u8; std::mem::size_of::<IdentityKeyEntryFFI>()]; const _: [u8; 8] = [0u8; std::mem::align_of::<IdentityKeyEntryFFI>()]; +// Compile-time guard for `IdentityEntryFFI`. Same rationale as the +// `IdentityKeyEntryFFI` guard above — the Swift side picks up the +// header layout via cbindgen, so a layout drift would manifest as a +// random `EXC_BAD_ACCESS` in the persistIdentities callback rather +// than a build error. Pin the expected size here so any reshape +// fails the cargo build first. +// +// Expected layout on 64-bit targets (all fields in declaration +// order under `#[repr(C)]`): +// +// 0..=31 identity_id [u8; 32] +// 32..=39 balance u64 +// 40..=47 revision u64 +// 48 identity_index_is_some bool +// 49..=51 (padding to 4) +// 52..=55 identity_index u32 +// 56 status u8 +// 57 wallet_id_is_some bool +// 58..=89 wallet_id [u8; 32] +// 90..=95 (padding to 8 for pointer alignment) +// 96..=103 dpns_names *const *const c_char +// 104..=111 dpns_names_count usize +// 112..=119 dpns_names_acquired_at *const u64 +// 120 dashpay_profile_present bool +// 121..=127 (padding to 8 for pointer alignment) +// 128..=135 dashpay_profile_display_name *const c_char +// 136..=143 dashpay_profile_bio *const c_char +// 144..=151 dashpay_profile_avatar_url *const c_char +// 152..=183 dashpay_profile_avatar_hash [u8; 32] +// 184 dashpay_profile_avatar_hash_present bool +// 185..=192 dashpay_profile_avatar_fingerprint [u8; 8] +// 193 dashpay_profile_avatar_fingerprint_present bool +// 194..=199 (padding to 8 for pointer alignment) +// 200..=207 dashpay_profile_public_message *const c_char +// +// Total size = 208, alignment = 8 (from u64 / pointer). 
+const _: [u8; 208] = [0u8; std::mem::size_of::<IdentityEntryFFI>()]; +const _: [u8; 8] = [0u8; std::mem::align_of::<IdentityEntryFFI>()]; + // --------------------------------------------------------------------------- // Conversions // --------------------------------------------------------------------------- @@ -158,9 +279,18 @@ const _: [u8; 8] = [0u8; std::mem::align_of::<IdentityKeyEntryFFI>()]; impl IdentityEntryFFI { /// Copy an [`IdentityEntry`] into a fresh FFI struct. /// - /// Pure scalar layout now — no heap allocations to track. The - /// `free_identity_entry_ffi` helper survives only to keep the - /// callback shape symmetric with [`IdentityKeyEntryFFI`]. + /// Allocates two parallel heap arrays for the DPNS labels: + /// `dpns_names` (a boxed slice of `CString::into_raw` pointers) + /// and `dpns_names_acquired_at` (a boxed slice of timestamps). + /// Both are released by [`free_identity_entry_ffi`] which the + /// persister callsite calls after the Swift handler returns. + /// + /// When [`IdentityEntry::dashpay_profile`] is `Some(_)` the + /// per-string profile fields are heap-allocated `CString`s + /// (released by [`free_identity_entry_ffi`]) and the + /// `_present` flag is set to `true`. When the profile is + /// `None` every profile field is zero/null and the flag is + /// `false`. 
pub fn from_entry(entry: &IdentityEntry) -> Self { let (wallet_id_is_some, wallet_id) = match entry.wallet_id { Some(id) => (true, id), @@ -171,6 +301,14 @@ impl IdentityEntryFFI { None => (false, 0), }; + let (dpns_names, dpns_names_acquired_at, dpns_names_count) = + allocate_dpns_arrays(&entry.dpns_names); + + let profile_fields = match &entry.dashpay_profile { + Some(profile) => DashPayProfileFields::from_profile(profile), + None => DashPayProfileFields::absent(), + }; + Self { identity_id: entry.id.to_buffer(), balance: entry.balance, @@ -180,10 +318,136 @@ impl IdentityEntryFFI { status: status_discriminant(entry.status), wallet_id_is_some, wallet_id, + dpns_names, + dpns_names_count, + dpns_names_acquired_at, + dashpay_profile_present: profile_fields.present, + dashpay_profile_display_name: profile_fields.display_name, + dashpay_profile_bio: profile_fields.bio, + dashpay_profile_avatar_url: profile_fields.avatar_url, + dashpay_profile_avatar_hash: profile_fields.avatar_hash, + dashpay_profile_avatar_hash_present: profile_fields.avatar_hash_present, + dashpay_profile_avatar_fingerprint: profile_fields.avatar_fingerprint, + dashpay_profile_avatar_fingerprint_present: profile_fields.avatar_fingerprint_present, + dashpay_profile_public_message: profile_fields.public_message, + } + } +} + +/// Intermediate carrier for the DashPay profile slice of +/// [`IdentityEntryFFI`]. Exists so [`IdentityEntryFFI::from_entry`] +/// can build the per-string heap allocations in one place without +/// open-coding the `Option` → `CString::into_raw` ladder +/// inline. Every owned pointer in here is released by +/// [`free_identity_entry_ffi`] when the parent struct is freed. 
+struct DashPayProfileFields { + present: bool, + display_name: *const c_char, + bio: *const c_char, + avatar_url: *const c_char, + avatar_hash: [u8; 32], + avatar_hash_present: bool, + avatar_fingerprint: [u8; 8], + avatar_fingerprint_present: bool, + public_message: *const c_char, +} + +impl DashPayProfileFields { + /// Zeroed/null carrier used when the source profile is `None`. + fn absent() -> Self { + Self { + present: false, + display_name: ptr::null(), + bio: ptr::null(), + avatar_url: ptr::null(), + avatar_hash: [0u8; 32], + avatar_hash_present: false, + avatar_fingerprint: [0u8; 8], + avatar_fingerprint_present: false, + public_message: ptr::null(), + } + } + + /// Heap-allocate the C strings for a present profile. Strings + /// containing interior NUL bytes (impossible in practice — the + /// DashPay contract validation rejects them) become null + /// pointers so the rest of the struct stays well-formed; Swift + /// reads each pointer as nullable already. + fn from_profile(profile: &DashPayProfile) -> Self { + let (avatar_hash, avatar_hash_present) = match profile.avatar_hash { + Some(h) => (h, true), + None => ([0u8; 32], false), + }; + let (avatar_fingerprint, avatar_fingerprint_present) = match profile.avatar_fingerprint { + Some(f) => (f, true), + None => ([0u8; 8], false), + }; + Self { + present: true, + display_name: optional_c_string(profile.display_name.as_deref()), + bio: optional_c_string(profile.bio.as_deref()), + avatar_url: optional_c_string(profile.avatar_url.as_deref()), + avatar_hash, + avatar_hash_present, + avatar_fingerprint, + avatar_fingerprint_present, + public_message: optional_c_string(profile.public_message.as_deref()), } } } +/// Convert an `Option<&str>` into a heap-allocated `CString` raw +/// pointer (`null` for `None`). The returned pointer is released +/// with `CString::from_raw` inside [`free_identity_entry_ffi`]. 
+fn optional_c_string(s: Option<&str>) -> *const c_char { + match s { + Some(s) => match CString::new(s) { + Ok(c) => c.into_raw() as *const c_char, + Err(_) => ptr::null(), + }, + None => ptr::null(), + } +} + +/// Allocate the two parallel DPNS arrays carried on +/// [`IdentityEntryFFI`]. Returns `(labels, acquired_at, count)` — +/// both pointers null and count `0` when the source slice is empty. +/// +/// `labels` is a `Box<[*const c_char]>` of `CString::into_raw` +/// pointers — release each entry with `CString::from_raw` before +/// dropping the outer slice. `acquired_at` is a `Box<[u64]>` of +/// matching Unix-millis timestamps (`0` for `None`). The two slices +/// always have the same length so the caller indexes them in +/// lock-step. +/// +/// Inner labels that fail `CString::new` (interior NUL — unreachable +/// in practice given DPNS validation) become null entries so the +/// outer iteration on the Swift side stays index-aligned with the +/// timestamp array. +fn allocate_dpns_arrays( + names: &[platform_wallet::DpnsNameInfo], +) -> (*const *const c_char, *const u64, usize) { + if names.is_empty() { + return (ptr::null(), ptr::null(), 0); + } + let mut labels: Vec<*const c_char> = Vec::with_capacity(names.len()); + let mut acquired: Vec<u64> = Vec::with_capacity(names.len()); + for info in names { + let raw = match CString::new(info.label.clone()) { + Ok(s) => s.into_raw() as *const c_char, + // Interior NUL: skip the label but keep the slot so the + // timestamp array stays index-aligned. + Err(_) => ptr::null(), + }; + labels.push(raw); + acquired.push(info.acquired_at.unwrap_or(0)); + } + let count = labels.len(); + let labels_ptr = Box::into_raw(labels.into_boxed_slice()) as *const *const c_char; + let acquired_ptr = Box::into_raw(acquired.into_boxed_slice()) as *const u64; + (labels_ptr, acquired_ptr, count) +} + impl IdentityKeyEntryFFI { /// Copy an [`IdentityKeyEntry`] into a fresh FFI struct. 
The /// caller owns the heap-allocated `public_key_data_ptr` byte @@ -251,13 +515,90 @@ fn status_discriminant(status: IdentityStatus) -> u8 { // Destructors // --------------------------------------------------------------------------- -/// Release heap allocations owned by an [`IdentityEntryFFI`]. +/// Release heap allocations owned by an [`IdentityEntryFFI`] — +/// the DPNS label C-string array (each entry plus the outer boxed +/// slice), the parallel `acquired_at` timestamp array, and (when +/// [`IdentityEntryFFI::dashpay_profile_present`] is true) the +/// per-string profile C-strings. /// -/// Currently a no-op — `IdentityEntryFFI` no longer carries any -/// owned heap allocations after the label field was dropped. Kept -/// for callsite symmetry with [`free_identity_key_entry_ffi`] and -/// to leave the door open for future heap-owned fields. -pub unsafe fn free_identity_entry_ffi(_entry: &mut IdentityEntryFFI) {} +/// Idempotent: pointers are nulled, the `_present` flag is reset, +/// and counts are zeroed after release, so a second call is a no-op. +/// +/// # Safety +/// +/// `entry` must have been produced by [`IdentityEntryFFI::from_entry`] +/// and not previously freed. The pointers must reference allocations +/// owned by this struct — passing in pointers Swift owns or pointers +/// from a different allocator will corrupt the heap. +pub unsafe fn free_identity_entry_ffi(entry: &mut IdentityEntryFFI) { + if !entry.dpns_names.is_null() && entry.dpns_names_count > 0 { + // Reconstruct the boxed slice we created via `Box::into_raw` + // on a `Box<[*const c_char]>`, then walk every entry to + // release the per-label C-string before the outer slice + // drops. 
+ let slice = unsafe { + std::slice::from_raw_parts_mut( + entry.dpns_names as *mut *const c_char, + entry.dpns_names_count, + ) + }; + for raw in slice.iter_mut() { + if !raw.is_null() { + let _ = unsafe { CString::from_raw(*raw as *mut c_char) }; + *raw = ptr::null(); + } + } + let _ = unsafe { Box::from_raw(slice as *mut [*const c_char]) }; + entry.dpns_names = ptr::null(); + } + if !entry.dpns_names_acquired_at.is_null() && entry.dpns_names_count > 0 { + let slice = unsafe { + std::slice::from_raw_parts_mut( + entry.dpns_names_acquired_at as *mut u64, + entry.dpns_names_count, + ) + }; + let _ = unsafe { Box::from_raw(slice as *mut [u64]) }; + entry.dpns_names_acquired_at = ptr::null(); + } + entry.dpns_names_count = 0; + + // Release each per-string DashPay profile allocation. The + // `_present` flag gates the whole section — when the source + // profile was `None`, every pointer is already null and there + // is nothing to free. We still walk each pointer individually + // because a profile can be present with one or more + // `Option` fields unset (and therefore null). + if entry.dashpay_profile_present { + free_optional_c_string(&mut entry.dashpay_profile_display_name); + free_optional_c_string(&mut entry.dashpay_profile_bio); + free_optional_c_string(&mut entry.dashpay_profile_avatar_url); + free_optional_c_string(&mut entry.dashpay_profile_public_message); + entry.dashpay_profile_avatar_hash = [0u8; 32]; + entry.dashpay_profile_avatar_hash_present = false; + entry.dashpay_profile_avatar_fingerprint = [0u8; 8]; + entry.dashpay_profile_avatar_fingerprint_present = false; + entry.dashpay_profile_present = false; + } +} + +/// Release a heap-allocated C string produced by +/// [`optional_c_string`] and null out the pointer in place. Idempotent +/// for `null` inputs so [`free_identity_entry_ffi`] stays a no-op on +/// double calls. 
+/// +/// # Safety +/// +/// The pointer must either be `null` or have been produced by +/// `CString::into_raw` on a `Box`-allocated `CString` (i.e. the +/// system allocator) — the same allocator `CString::from_raw` +/// reclaims from. +unsafe fn free_optional_c_string(slot: &mut *const c_char) { + if !slot.is_null() { + let _ = unsafe { CString::from_raw(*slot as *mut c_char) }; + *slot = ptr::null(); + } +} /// Release heap allocations owned by an [`IdentityKeyEntryFFI`] — /// the public-key data buffer and, when present, the derivation-path @@ -321,6 +662,123 @@ mod tests { assert_eq!(ffi.status, 2); // Active assert!(ffi.wallet_id_is_some); assert_eq!(ffi.wallet_id, [9u8; 32]); + assert!(ffi.dpns_names.is_null()); + assert!(ffi.dpns_names_acquired_at.is_null()); + assert_eq!(ffi.dpns_names_count, 0); + unsafe { free_identity_entry_ffi(&mut ffi) }; + } + + #[test] + fn test_identity_entry_ffi_with_dpns_names() { + use platform_wallet::DpnsNameInfo; + let entry = IdentityEntry { + id: Identifier::from([4u8; 32]), + balance: 0, + revision: 0, + identity_index: Some(0), + last_updated_balance_block_time: None, + last_synced_keys_block_time: None, + dpns_names: vec![ + DpnsNameInfo { + label: "alice".to_string(), + acquired_at: Some(1_700_000_000_000), + }, + DpnsNameInfo { + label: "alice2".to_string(), + acquired_at: None, + }, + ], + contested_dpns_names: Vec::new(), + status: IdentityStatus::Active, + wallet_id: None, + dashpay_profile: None, + dashpay_payments: Default::default(), + }; + let mut ffi = IdentityEntryFFI::from_entry(&entry); + assert_eq!(ffi.dpns_names_count, 2); + assert!(!ffi.dpns_names.is_null()); + assert!(!ffi.dpns_names_acquired_at.is_null()); + + // Read both labels back via the C-string API to validate the + // shape Swift is going to walk. 
+ let labels: &[*const c_char] = + unsafe { std::slice::from_raw_parts(ffi.dpns_names, ffi.dpns_names_count) }; + let acquired: &[u64] = + unsafe { std::slice::from_raw_parts(ffi.dpns_names_acquired_at, ffi.dpns_names_count) }; + assert!(!labels[0].is_null()); + assert!(!labels[1].is_null()); + let s0 = unsafe { std::ffi::CStr::from_ptr(labels[0]) } + .to_str() + .unwrap(); + let s1 = unsafe { std::ffi::CStr::from_ptr(labels[1]) } + .to_str() + .unwrap(); + assert_eq!(s0, "alice"); + assert_eq!(s1, "alice2"); + assert_eq!(acquired[0], 1_700_000_000_000); + assert_eq!(acquired[1], 0); + + unsafe { free_identity_entry_ffi(&mut ffi) }; + assert!(ffi.dpns_names.is_null()); + assert!(ffi.dpns_names_acquired_at.is_null()); + assert_eq!(ffi.dpns_names_count, 0); + + // Idempotent: a second call must not double-free. + unsafe { free_identity_entry_ffi(&mut ffi) }; + } + + #[test] + fn test_identity_entry_ffi_with_dashpay_profile() { + use platform_wallet::DashPayProfile; + let entry = IdentityEntry { + id: Identifier::from([5u8; 32]), + balance: 0, + revision: 0, + identity_index: Some(1), + last_updated_balance_block_time: None, + last_synced_keys_block_time: None, + dpns_names: Vec::new(), + contested_dpns_names: Vec::new(), + status: IdentityStatus::Active, + wallet_id: None, + dashpay_profile: Some(DashPayProfile { + display_name: Some("Bob".to_string()), + bio: Some("Hello".to_string()), + avatar_url: Some("https://example.com/a.png".to_string()), + avatar_hash: Some([0xAB; 32]), + avatar_fingerprint: Some([0xCD; 8]), + public_message: None, + }), + dashpay_payments: Default::default(), + }; + let mut ffi = IdentityEntryFFI::from_entry(&entry); + assert!(ffi.dashpay_profile_present); + let display = unsafe { std::ffi::CStr::from_ptr(ffi.dashpay_profile_display_name) } + .to_str() + .unwrap(); + assert_eq!(display, "Bob"); + let bio = unsafe { std::ffi::CStr::from_ptr(ffi.dashpay_profile_bio) } + .to_str() + .unwrap(); + assert_eq!(bio, "Hello"); + let url = unsafe { 
std::ffi::CStr::from_ptr(ffi.dashpay_profile_avatar_url) } + .to_str() + .unwrap(); + assert_eq!(url, "https://example.com/a.png"); + assert!(ffi.dashpay_profile_avatar_hash_present); + assert_eq!(ffi.dashpay_profile_avatar_hash, [0xAB; 32]); + assert!(ffi.dashpay_profile_avatar_fingerprint_present); + assert_eq!(ffi.dashpay_profile_avatar_fingerprint, [0xCD; 8]); + assert!(ffi.dashpay_profile_public_message.is_null()); + + unsafe { free_identity_entry_ffi(&mut ffi) }; + assert!(!ffi.dashpay_profile_present); + assert!(ffi.dashpay_profile_display_name.is_null()); + assert!(ffi.dashpay_profile_bio.is_null()); + assert!(ffi.dashpay_profile_avatar_url.is_null()); + assert!(!ffi.dashpay_profile_avatar_hash_present); + assert!(!ffi.dashpay_profile_avatar_fingerprint_present); + // Idempotent — second call must not double-free. unsafe { free_identity_entry_ffi(&mut ffi) }; } diff --git a/packages/rs-platform-wallet-ffi/src/identity_sync.rs b/packages/rs-platform-wallet-ffi/src/identity_sync.rs new file mode 100644 index 00000000000..2b5a7331d5a --- /dev/null +++ b/packages/rs-platform-wallet-ffi/src/identity_sync.rs @@ -0,0 +1,537 @@ +//! FFI bindings for `PlatformWalletManager`'s per-identity token state +//! sync coordinator. +//! +//! Mirrors the shape of [`crate::platform_address_sync`]: lifecycle +//! controls (`start` / `stop` / `is_running` / `is_syncing` / +//! `last_sync_unix_seconds` / `set_interval` / `sync_now`), plus a +//! flat snapshot read API for the per-identity cache (single +//! identity or whole-store) with a paired free helper. +//! +//! All `*const` and `*mut` parameters follow the same convention as +//! the rest of this crate: pointers may be null, the function returns +//! [`PlatformWalletFFIResult::ErrorNullPointer`] if a non-optional +//! pointer is null, and detailed error context lands in the optional +//! `out_error` slot when supplied. 
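The null-pointer and `out_error` convention described above can be sketched in miniature. This is a simplified stand-in, not the crate's real entry points: `FfiStatus`, `example_is_running`, and the bare status-code error slot are hypothetical (the real functions use `PlatformWalletFFIResult`, a `PlatformWalletFFIError` struct, and a handle table):

```rust
/// Hypothetical coarse status code mirroring the shape of the
/// crate's result enum.
#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum FfiStatus {
    Success = 0,
    ErrorNullPointer = 1,
    ErrorInvalidHandle = 2,
}

/// Sketch of the convention: a required out-parameter is
/// null-checked up front, the result is written through it on
/// success, and richer failure context goes into an optional
/// error slot that the caller may pass as null.
///
/// # Safety
/// `out_running` and `out_error_code` must each be null or valid
/// for writes.
pub unsafe fn example_is_running(
    lookup_ok: bool, // stands in for the handle-table lookup succeeding
    out_running: *mut bool,
    out_error_code: *mut FfiStatus, // stands in for the error struct slot
) -> FfiStatus {
    if out_running.is_null() {
        // Non-optional out pointer: fail fast, touch nothing.
        return FfiStatus::ErrorNullPointer;
    }
    if !lookup_ok {
        // Optional error slot: only written when supplied.
        if !out_error_code.is_null() {
            *out_error_code = FfiStatus::ErrorInvalidHandle;
        }
        return FfiStatus::ErrorInvalidHandle;
    }
    *out_running = true;
    FfiStatus::Success
}
```

The return value duplicates the error slot's code so callers that pass a null `out_error` still get a usable status.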
+ +use std::time::Duration; + +use platform_wallet::{IdentityTokenSyncInfo, IdentityTokenSyncState}; + +use crate::error::*; +use crate::handle::*; +use crate::runtime::runtime; + +/// Flattened per-(identity, token) row mirroring +/// [`IdentityTokenSyncInfo`]. +/// +/// `identity_id` is replicated onto each row so the whole-store +/// snapshot can be a single flat array even though the in-memory +/// cache is keyed by identity. +#[repr(C)] +#[derive(Debug, Clone, Copy)] +pub struct IdentityTokenSyncInfoFFI { + /// 32-byte identity that owns this row. + pub identity_id: [u8; 32], + /// 32-byte token id. + pub token_id: [u8; 32], + /// 32-byte data-contract id that issued the token. Currently a + /// zero-filled placeholder until token → contract resolution is + /// wired up on the watch registry. + pub contract_id: [u8; 32], + /// Latest balance reported by Platform. + pub balance: u64, + /// `IdentityContractNonce` this identity would use for the next + /// state transition against `contract_id`. Same value across + /// every row in this snapshot that shares an `(identity_id, + /// contract_id)` tuple. `0` means "not fetched yet" until + /// per-token contract resolution lands. + pub identity_contract_nonce: u64, +} + +impl IdentityTokenSyncInfoFFI { + fn from_state_row(row: &IdentityTokenSyncState, info: &IdentityTokenSyncInfo) -> Self { + Self { + identity_id: *row.identity_id.as_bytes(), + token_id: *info.token_id.as_bytes(), + contract_id: *info.contract_id.as_bytes(), + balance: info.balance, + identity_contract_nonce: info.identity_contract_nonce, + } + } +} + +/// Start the identity-token sync manager in the background. 
+#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_start( + handle: Handle, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + let _entered = runtime().enter(); + manager.identity_sync_arc().start(); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Stop the identity-token sync manager if it is running. +#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_stop( + handle: Handle, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + manager.identity_sync().stop(); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Whether the identity-token sync background loop is running. 
+#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_is_running( + handle: Handle, + out_running: *mut bool, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + if out_running.is_null() { + return PlatformWalletFFIResult::ErrorNullPointer; + } + + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + *out_running = manager.identity_sync().is_running(); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Whether an identity-token sync pass is currently in flight. +#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_is_syncing( + handle: Handle, + out_syncing: *mut bool, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + if out_syncing.is_null() { + return PlatformWalletFFIResult::ErrorNullPointer; + } + + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + *out_syncing = manager.identity_sync().is_syncing(); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Unix seconds of the last completed identity-token sync pass for +/// the given identity, or 0 if that identity has never been synced. +/// +/// `identity_id_ptr` must point to a 32-byte identifier. The +/// last-sync timestamp is per-identity (not global) — different +/// identities may have different watermarks. 
+#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_last_sync_unix_seconds( + handle: Handle, + identity_id_ptr: *const u8, + out_last_sync_unix: *mut u64, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + if identity_id_ptr.is_null() || out_last_sync_unix.is_null() { + return PlatformWalletFFIResult::ErrorNullPointer; + } + let mut id_bytes = [0u8; 32]; + std::ptr::copy_nonoverlapping(identity_id_ptr, id_bytes.as_mut_ptr(), 32); + let identity_id = dpp::prelude::Identifier::from(id_bytes); + + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + let mgr = manager.identity_sync_arc(); + let value = runtime() + .block_on(async move { mgr.last_sync_unix_for_identity(&identity_id).await }); + *out_last_sync_unix = value.unwrap_or(0); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Set the background identity-token sync interval in seconds. +#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_set_interval( + handle: Handle, + interval_seconds: u64, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + manager + .identity_sync() + .set_interval(Duration::from_secs(interval_seconds)); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Run one identity-token sync pass across all registered wallets. +/// +/// Synchronous from the FFI caller's point of view — blocks the +/// calling thread until the pass completes. 
If a pass is already in +/// flight (e.g. fired by the background loop), returns `Success` +/// immediately without scheduling extra work; check `is_syncing` if +/// the caller needs to distinguish. +#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_sync_now( + handle: Handle, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + runtime().block_on(manager.identity_sync().sync_now()); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Snapshot the identity-token sync state for one identity. +/// +/// On success: +/// * `*out_rows` points to a heap-owned array of +/// [`IdentityTokenSyncInfoFFI`] of length `*out_rows_count`, +/// * `*out_last_sync_unix` is the per-identity last-sync timestamp +/// (`0` if never synced). +/// +/// If the identity has no cached state, `*out_rows` is set to null +/// and `*out_rows_count` to 0 (still `Success` — empty is not an +/// error). +/// +/// Free `*out_rows` with +/// [`platform_wallet_manager_identity_sync_state_free`]. 
+#[no_mangle]
+pub unsafe extern "C" fn platform_wallet_manager_identity_sync_state_for_identity(
+    handle: Handle,
+    identity_id_ptr: *const u8,
+    out_rows: *mut *mut IdentityTokenSyncInfoFFI,
+    out_rows_count: *mut usize,
+    out_last_sync_unix: *mut u64,
+    out_error: *mut PlatformWalletFFIError,
+) -> PlatformWalletFFIResult {
+    if identity_id_ptr.is_null()
+        || out_rows.is_null()
+        || out_rows_count.is_null()
+        || out_last_sync_unix.is_null()
+    {
+        return PlatformWalletFFIResult::ErrorNullPointer;
+    }
+
+    let mut id_bytes = [0u8; 32];
+    std::ptr::copy_nonoverlapping(identity_id_ptr, id_bytes.as_mut_ptr(), 32);
+    let identity_id = dpp::prelude::Identifier::from(id_bytes);
+
+    PLATFORM_WALLET_MANAGER_STORAGE
+        .with_item(handle, |manager| {
+            let mgr = manager.identity_sync_arc();
+            let row = runtime().block_on(async move { mgr.state_for_identity(&identity_id).await });
+            match row {
+                Some(state) => {
+                    let rows: Vec<IdentityTokenSyncInfoFFI> = state
+                        .tokens
+                        .iter()
+                        .map(|info| IdentityTokenSyncInfoFFI::from_state_row(&state, info))
+                        .collect();
+                    let len = rows.len();
+                    let boxed = rows.into_boxed_slice();
+                    *out_rows = Box::into_raw(boxed) as *mut IdentityTokenSyncInfoFFI;
+                    *out_rows_count = len;
+                    *out_last_sync_unix = state.last_sync_unix;
+                }
+                None => {
+                    *out_rows = std::ptr::null_mut();
+                    *out_rows_count = 0;
+                    *out_last_sync_unix = 0;
+                }
+            }
+            PlatformWalletFFIResult::Success
+        })
+        .unwrap_or_else(|| {
+            if !out_error.is_null() {
+                *out_error = PlatformWalletFFIError::new(
+                    PlatformWalletFFIResult::ErrorInvalidHandle,
+                    "Invalid manager handle",
+                );
+            }
+            PlatformWalletFFIResult::ErrorInvalidHandle
+        })
+}
+
+/// Snapshot the identity-token sync state for every cached identity
+/// as a single flat array.
+///
+/// `identity_id` is replicated on every row so callers can group on
+/// it directly; ordering follows the BTreeMap iteration order over
+/// `(identity_id, token_id)`.
+///
+/// On success:
+/// * `*out_rows` points to a heap-owned array of
+///   [`IdentityTokenSyncInfoFFI`] of length `*out_rows_count`.
+///
+/// Free `*out_rows` with
+/// [`platform_wallet_manager_identity_sync_state_free`].
+#[no_mangle]
+pub unsafe extern "C" fn platform_wallet_manager_identity_sync_state_all(
+    handle: Handle,
+    out_rows: *mut *mut IdentityTokenSyncInfoFFI,
+    out_rows_count: *mut usize,
+    out_error: *mut PlatformWalletFFIError,
+) -> PlatformWalletFFIResult {
+    if out_rows.is_null() || out_rows_count.is_null() {
+        return PlatformWalletFFIResult::ErrorNullPointer;
+    }
+
+    PLATFORM_WALLET_MANAGER_STORAGE
+        .with_item(handle, |manager| {
+            let mgr = manager.identity_sync_arc();
+            let snapshot = runtime().block_on(async move { mgr.all_state().await });
+            let mut rows: Vec<IdentityTokenSyncInfoFFI> = Vec::new();
+            for state in snapshot.values() {
+                for info in &state.tokens {
+                    rows.push(IdentityTokenSyncInfoFFI::from_state_row(state, info));
+                }
+            }
+            let len = rows.len();
+            if len == 0 {
+                *out_rows = std::ptr::null_mut();
+                *out_rows_count = 0;
+            } else {
+                let boxed = rows.into_boxed_slice();
+                *out_rows = Box::into_raw(boxed) as *mut IdentityTokenSyncInfoFFI;
+                *out_rows_count = len;
+            }
+            PlatformWalletFFIResult::Success
+        })
+        .unwrap_or_else(|| {
+            if !out_error.is_null() {
+                *out_error = PlatformWalletFFIError::new(
+                    PlatformWalletFFIResult::ErrorInvalidHandle,
+                    "Invalid manager handle",
+                );
+            }
+            PlatformWalletFFIResult::ErrorInvalidHandle
+        })
+}
+
+/// Free a heap-owned `IdentityTokenSyncInfoFFI` array returned by one
+/// of the snapshot getters above. Safe to call with `(null, 0)` —
+/// no-op.
+#[no_mangle]
+pub unsafe extern "C" fn platform_wallet_manager_identity_sync_state_free(
+    rows: *mut IdentityTokenSyncInfoFFI,
+    count: usize,
+) {
+    if rows.is_null() || count == 0 {
+        return;
+    }
+    let _ = Box::from_raw(std::ptr::slice_from_raw_parts_mut(rows, count));
+}
+
+/// Decode a `*const u8` of length `count * 32` into a `Vec<Identifier>`.
+///
+/// `ptr` may be null when `count == 0`; otherwise it must point to a
+/// contiguous buffer of `count * 32` bytes laid out as back-to-back
+/// 32-byte identifiers. Used by the registry lifecycle calls below.
+unsafe fn read_token_ids(ptr: *const u8, count: usize) -> Option<Vec<dpp::prelude::Identifier>> {
+    if count == 0 {
+        return Some(Vec::new());
+    }
+    if ptr.is_null() {
+        return None;
+    }
+    let mut out = Vec::with_capacity(count);
+    for i in 0..count {
+        let mut buf = [0u8; 32];
+        std::ptr::copy_nonoverlapping(ptr.add(i * 32), buf.as_mut_ptr(), 32);
+        out.push(dpp::prelude::Identifier::from(buf));
+    }
+    Some(out)
+}
+
+/// Register an identity with the token-sync registry.
+///
+/// `identity_id_ptr` must point to a 32-byte identifier.
+/// `token_ids_ptr` must point to `token_ids_count * 32` bytes (each
+/// 32-byte chunk is one token id) — null is permitted only when
+/// `token_ids_count == 0` (registers the identity with no watched
+/// tokens). Idempotent: a second call replaces the row.
+#[no_mangle]
+pub unsafe extern "C" fn platform_wallet_manager_identity_sync_register_identity(
+    handle: Handle,
+    identity_id_ptr: *const u8,
+    token_ids_ptr: *const u8,
+    token_ids_count: usize,
+    out_error: *mut PlatformWalletFFIError,
+) -> PlatformWalletFFIResult {
+    if identity_id_ptr.is_null() {
+        return PlatformWalletFFIResult::ErrorNullPointer;
+    }
+    let mut id_bytes = [0u8; 32];
+    std::ptr::copy_nonoverlapping(identity_id_ptr, id_bytes.as_mut_ptr(), 32);
+    let identity_id = dpp::prelude::Identifier::from(id_bytes);
+
+    let Some(token_ids) = read_token_ids(token_ids_ptr, token_ids_count) else {
+        return PlatformWalletFFIResult::ErrorNullPointer;
+    };
+
+    PLATFORM_WALLET_MANAGER_STORAGE
+        .with_item(handle, |manager| {
+            let mgr = manager.identity_sync_arc();
+            runtime().block_on(async move { mgr.register_identity(identity_id, token_ids).await });
+            PlatformWalletFFIResult::Success
+        })
+        .unwrap_or_else(|| {
+            if !out_error.is_null() {
+                *out_error = PlatformWalletFFIError::new(
PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Unregister an identity from the token-sync registry. +/// +/// `identity_id_ptr` must point to a 32-byte identifier. Idempotent — +/// removing an unknown identity is a successful no-op. +#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_unregister_identity( + handle: Handle, + identity_id_ptr: *const u8, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + if identity_id_ptr.is_null() { + return PlatformWalletFFIResult::ErrorNullPointer; + } + let mut id_bytes = [0u8; 32]; + std::ptr::copy_nonoverlapping(identity_id_ptr, id_bytes.as_mut_ptr(), 32); + let identity_id = dpp::prelude::Identifier::from(id_bytes); + + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + let mgr = manager.identity_sync_arc(); + runtime().block_on(async move { mgr.unregister_identity(&identity_id).await }); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} + +/// Replace the watched-token list for an already-registered identity. +/// +/// `identity_id_ptr` must point to a 32-byte identifier. +/// `token_ids_ptr` must point to `token_ids_count * 32` bytes (each +/// 32-byte chunk is one token id) — null is permitted only when +/// `token_ids_count == 0` (clears the watched-token list, keeping the +/// row registered with no tokens). No-op on an unregistered identity +/// (returns `Success`); call `register_identity` first if you need +/// promotion semantics. 
+#[no_mangle] +pub unsafe extern "C" fn platform_wallet_manager_identity_sync_update_watched_tokens( + handle: Handle, + identity_id_ptr: *const u8, + token_ids_ptr: *const u8, + token_ids_count: usize, + out_error: *mut PlatformWalletFFIError, +) -> PlatformWalletFFIResult { + if identity_id_ptr.is_null() { + return PlatformWalletFFIResult::ErrorNullPointer; + } + let mut id_bytes = [0u8; 32]; + std::ptr::copy_nonoverlapping(identity_id_ptr, id_bytes.as_mut_ptr(), 32); + let identity_id = dpp::prelude::Identifier::from(id_bytes); + + let Some(token_ids) = read_token_ids(token_ids_ptr, token_ids_count) else { + return PlatformWalletFFIResult::ErrorNullPointer; + }; + + PLATFORM_WALLET_MANAGER_STORAGE + .with_item(handle, |manager| { + let mgr = manager.identity_sync_arc(); + runtime() + .block_on(async move { mgr.update_watched_tokens(identity_id, token_ids).await }); + PlatformWalletFFIResult::Success + }) + .unwrap_or_else(|| { + if !out_error.is_null() { + *out_error = PlatformWalletFFIError::new( + PlatformWalletFFIResult::ErrorInvalidHandle, + "Invalid manager handle", + ); + } + PlatformWalletFFIResult::ErrorInvalidHandle + }) +} diff --git a/packages/rs-platform-wallet-ffi/src/lib.rs b/packages/rs-platform-wallet-ffi/src/lib.rs index b85db786784..c0f3123b530 100644 --- a/packages/rs-platform-wallet-ffi/src/lib.rs +++ b/packages/rs-platform-wallet-ffi/src/lib.rs @@ -11,6 +11,7 @@ pub mod asset_lock; pub mod contact; +pub mod contact_persistence; pub mod contact_request; pub mod core_address_types; pub mod core_wallet; @@ -35,6 +36,7 @@ pub mod identity_persistence; pub mod identity_registration; pub mod identity_registration_funded_with_signer; pub mod identity_registration_with_signer; +pub mod identity_sync; pub mod identity_top_up; pub mod identity_transfer; pub mod identity_update; @@ -55,12 +57,14 @@ pub mod tokens; pub mod types; pub mod utils; pub mod wallet; +pub mod wallet_registration_persistence; pub mod wallet_restore_types; pub mod xpub_render; 
// Re-exports pub use asset_lock::*; pub use contact::*; +pub use contact_persistence::*; pub use contact_request::*; pub use core_address_types::*; pub use core_wallet::*; @@ -84,6 +88,7 @@ pub use identity_manager::*; pub use identity_persistence::*; pub use identity_registration_funded_with_signer::*; pub use identity_registration_with_signer::*; +pub use identity_sync::*; pub use identity_top_up::*; pub use identity_transfer::*; pub use identity_update::*; @@ -103,6 +108,7 @@ pub use tokens::*; pub use types::*; pub use utils::*; pub use wallet::*; +pub use wallet_registration_persistence::*; pub use wallet_restore_types::*; pub use xpub_render::*; diff --git a/packages/rs-platform-wallet-ffi/src/memory_explorer.rs b/packages/rs-platform-wallet-ffi/src/memory_explorer.rs index a06fbb2cd5f..1b9073e6e0c 100644 --- a/packages/rs-platform-wallet-ffi/src/memory_explorer.rs +++ b/packages/rs-platform-wallet-ffi/src/memory_explorer.rs @@ -4,13 +4,18 @@ //! Powers the iOS "Wallet Memory Explorer" view — a read-only dump of //! what Rust currently holds for a loaded wallet (managed identity ids, //! out-of-wallet/observed identity ids, gap-limit registration high -//! water mark, asset lock and token-balance counts). +//! water mark, asset lock counts). //! //! Mirrors the per-wallet `info.identity_manager.*` and -//! `info.tracked_asset_locks` / `info.token_balances` surface that -//! existing FFI calls already expose piecemeal — this module just -//! gives Swift one-shot enumerators so the explorer view can render -//! a snapshot without juggling multiple FFI handles. +//! `info.tracked_asset_locks` surface that existing FFI calls already +//! expose piecemeal — this module just gives Swift one-shot +//! enumerators so the explorer view can render a snapshot without +//! juggling multiple FFI handles. +//! +//! Token balance state is owned by +//! [`platform_wallet::IdentitySyncManager`] — query that via the +//! 
`platform_wallet_manager_identity_sync_state_*` family rather than +//! through this explorer. //! //! All entry points are read-only. Holding the wallet manager //! `blocking_read` guard is fine on the FFI thread (matches the @@ -41,9 +46,6 @@ pub struct PlatformWalletMemorySummaryFFI { /// Number of tracked asset locks the wallet currently holds in /// memory (`PlatformWalletInfo.tracked_asset_locks`). pub tracked_asset_locks_count: usize, - /// Number of `(identity_id, token_id) -> amount` entries on the - /// wallet (`PlatformWalletInfo.token_balances`). - pub token_balances_count: usize, } /// Identity lifecycle status mirror. @@ -257,7 +259,6 @@ pub unsafe extern "C" fn platform_wallet_get_in_memory_summary( .highest_registration_index(&wallet_id) .map_or(0u32, |i| i + 1); let tracked_asset_locks_count = info.tracked_asset_locks.len(); - let token_balances_count = info.token_balances.len(); unsafe { *out = PlatformWalletMemorySummaryFFI { @@ -265,7 +266,6 @@ pub unsafe extern "C" fn platform_wallet_get_in_memory_summary( watched_count, last_scanned_index, tracked_asset_locks_count, - token_balances_count, }; } PlatformWalletFFIResult::Success diff --git a/packages/rs-platform-wallet-ffi/src/persistence.rs b/packages/rs-platform-wallet-ffi/src/persistence.rs index a83215c0293..a1b5204ffe7 100644 --- a/packages/rs-platform-wallet-ffi/src/persistence.rs +++ b/packages/rs-platform-wallet-ffi/src/persistence.rs @@ -15,8 +15,8 @@ use key_wallet::wallet::Wallet; use key_wallet::{AddressInfo, Network}; use parking_lot::RwLock; use platform_wallet::changeset::{ - ClientStartState, ClientWalletStartState, Merge, PersistenceError, PlatformWalletChangeSet, - PlatformWalletPersistence, + AccountAddressPoolEntry, AccountRegistrationEntry, ClientStartState, ClientWalletStartState, + Merge, PersistenceError, PlatformWalletChangeSet, PlatformWalletPersistence, }; use platform_wallet::wallet::platform_wallet::WalletId; use 
platform_wallet::wallet::{PerAccountPlatformAddressState, PerWalletPlatformAddressState}; @@ -25,6 +25,9 @@ use std::ffi::CString; use std::os::raw::c_void; use std::slice; +use crate::contact_persistence::{ + free_contact_requests_ffi, ContactRequestFFI, ContactRequestRemovalFFI, +}; use crate::core_address_types::{AddressPoolTypeTagFFI, CoreAddressEntryFFI}; use crate::core_wallet_types::{free_wallet_changeset_ffi, WalletChangeSetFFI}; use crate::identity_persistence::{ @@ -33,6 +36,7 @@ use crate::identity_persistence::{ }; use crate::platform_address_types::AddressBalanceEntryFFI; use crate::token_persistence::{TokenBalanceRemovalFFI, TokenBalanceUpsertFFI}; +use crate::wallet_registration_persistence::AccountAddressPoolFFI; use crate::wallet_restore_types::{ AccountSpecFFI, AccountTypeTagFFI, IdentityKeyRestoreFFI, IdentityRestoreEntryFFI, LoadWalletListFreeFn, StandardAccountTypeTagFFI, WalletRestoreEntryFFI, @@ -113,16 +117,25 @@ pub struct PersistenceCallbacks { last_known_recent_block: u64, ) -> i32, >, - /// Called once per account when the account is added to a wallet. - /// Caller should upsert keyed by `(wallet_id, account spec)`. - /// Returns 0 on success. A non-zero return is propagated as a - /// `PersistenceError` from `store_account`, aborting the - /// caller's operation. - pub on_persist_account_fn: Option< + /// Called once per registration round with the array of accounts + /// being persisted. Each entry is the same flat + /// [`AccountSpecFFI`] shape the load callback returns, so the + /// receiver matches by `(type_tag, index, registration_index, + /// key_class, user_identity_id, friend_identity_id, standard_tag)` + /// and writes one row per spec. The pointer + every nested + /// `account_xpub_bytes` buffer are Rust-owned and live for the + /// callback window only — Swift must copy the bytes before the + /// call returns. + /// + /// Returns 0 on success. 
A non-zero return flips the round's + /// `success` flag to `false` so [`Self::on_changeset_end_fn`] + /// receives the rollback signal. + pub on_persist_account_registrations_fn: Option< unsafe extern "C" fn( context: *mut c_void, wallet_id: *const u8, - spec: *const AccountSpecFFI, + specs: *const AccountSpecFFI, + count: usize, ) -> i32, >, /// Invoked on [`FFIPersister::load`] to pull the persisted wallet @@ -150,14 +163,16 @@ pub struct PersistenceCallbacks { count: usize, ), >, - /// Called once per wallet at registration with network tag and - /// birth height. `network` uses the same discriminant as - /// `WalletRestoreEntryFFI.network` (0 = Mainnet, 1 = Testnet, - /// 2 = Devnet, 3 = Regtest). `birth_height` is the best estimate - /// of the block at which the wallet started; zero means - /// "scan from genesis / unknown". Returns 0 on success. A - /// non-zero return is propagated as a `PersistenceError` from - /// `store_wallet_metadata`, aborting the caller's operation. + /// Called once per registration round with the wallet's + /// network tag + birth height. `network` uses the same + /// discriminant as `WalletRestoreEntryFFI.network` (0 = Mainnet, + /// 1 = Testnet, 2 = Devnet, 3 = Regtest). `birth_height` is the + /// best estimate of the block at which the wallet started; zero + /// means "scan from genesis / unknown". + /// + /// Returns 0 on success. A non-zero return flips the round's + /// `success` flag to `false` so [`Self::on_changeset_end_fn`] + /// receives the rollback signal. pub on_persist_wallet_metadata_fn: Option< unsafe extern "C" fn( context: *mut c_void, @@ -166,21 +181,24 @@ pub struct PersistenceCallbacks { birth_height: u32, ) -> i32, >, - /// Called per account whenever its address pool content changes - /// (initial population, pool extension, `used` flip). The - /// `account` pointer identifies which `PersistentAccount` row to - /// link the addresses to (Swift matches by the same key used in - /// `on_persist_account_fn`). 
The addresses slice is contiguous - /// and Rust-owned; Swift must copy any string before returning. - /// Returns 0 on success. A non-zero return is propagated as a - /// `PersistenceError` from `store_account_addresses`, aborting - /// the caller's operation. - pub on_persist_account_addresses_fn: Option< + /// Called once per registration round with the array of address + /// pool snapshots. Each [`AccountAddressPoolFFI`] entry carries + /// the owning account spec (matched against the + /// [`Self::on_persist_account_registrations_fn`] entry that wrote + /// the row), the pool-type discriminant, and a contiguous slice + /// of [`CoreAddressEntryFFI`] rows for the pool. All pointers + /// (the entry array, every nested address slice, every nested + /// c-string) are Rust-owned and valid only for the callback + /// window — Swift must copy strings before returning. + /// + /// Returns 0 on success. A non-zero return flips the round's + /// `success` flag to `false` so [`Self::on_changeset_end_fn`] + /// receives the rollback signal. + pub on_persist_account_address_pools_fn: Option< unsafe extern "C" fn( context: *mut c_void, wallet_id: *const u8, - account: *const AccountSpecFFI, - addresses: *const CoreAddressEntryFFI, + pools: *const AccountAddressPoolFFI, count: usize, ) -> i32, >, @@ -217,9 +235,11 @@ pub struct PersistenceCallbacks { /// token_id) -> balance` upserts and `(identity_id, token_id)` /// tombstones. Swift maps upserts onto `PersistentTokenBalance` /// rows keyed by `(tokenId, identityId)` and removes rows for - /// every tombstone. The `watched` / `unwatched` portions of the - /// underlying changeset are not surfaced — see - /// [`crate::token_persistence`] for the rationale. + /// every tombstone. The watch list itself is no longer + /// changeset-replicated — it lives in the + /// [`platform_wallet::IdentitySyncManager`] in-memory cache and + /// is rehydrated from the SwiftData `PersistentTokenBalance` + /// rows on app start. 
pub on_persist_token_balances_fn: Option< unsafe extern "C" fn( context: *mut c_void, @@ -230,6 +250,36 @@ pub struct PersistenceCallbacks { removed_count: usize, ) -> i32, >, + /// Called with a flat `ContactChangeSet` projection — sent / + /// incoming / established contact requests in `upserts`, plus + /// parallel sent / incoming tombstone arrays. + /// + /// `ContactChangeSet` is a top-level (not per-identity) + /// changeset, but the callback is still wallet-scoped via + /// `wallet_id` so the Swift handler can resolve the network for + /// the rows it persists. + /// + /// The `established` map is projected as **two** rows per entry + /// (one with `is_outgoing == true`, one with `is_outgoing == + /// false`) covering the underlying outgoing+incoming + /// `ContactRequest` pair on `EstablishedContact`. The auto- + /// establishment contract on the Rust side drops any matching + /// pending entries when the contact is established (no separate + /// tombstone is emitted), so the Swift unique constraint upserts + /// these rows in place over any prior pending row for the same + /// `(owner, contact, direction)`. + pub on_persist_contacts_fn: Option< + unsafe extern "C" fn( + context: *mut c_void, + wallet_id: *const u8, + upserts_ptr: *const ContactRequestFFI, + upserts_count: usize, + removed_sent_ptr: *const ContactRequestRemovalFFI, + removed_sent_count: usize, + removed_incoming_ptr: *const ContactRequestRemovalFFI, + removed_incoming_count: usize, + ) -> i32, + >, } // SAFETY: The context pointer is managed by the FFI caller who must ensure @@ -274,6 +324,106 @@ impl PlatformWalletPersistence for FFIPersister { } let mut round_success = true; + // Wallet-registration metadata. Fires at most once per round + // (registration emits the entry; subsequent rounds carry + // `wallet_metadata: None` so no callback fires). 
+ if let Some(meta) = changeset.wallet_metadata.as_ref() { + if let Some(cb) = self.callbacks.on_persist_wallet_metadata_fn { + let network_tag = network_tag_for(meta.network); + let result = unsafe { + cb( + self.callbacks.context, + wallet_id.as_ptr(), + network_tag, + meta.birth_height, + ) + }; + if result != 0 { + eprintln!( + "Wallet metadata persistence callback returned error code {}", + result + ); + round_success = false; + } + } + } + + // Per-account registration entries. The `_xpub_bytes_storage` + // Vec keeps the bincoded xpub buffers alive for the callback + // window — `AccountSpecFFI.account_xpub_bytes` borrows into + // it. Same lifetime discipline the prior dedicated callback + // used. + if !changeset.account_registrations.is_empty() { + if let Some(cb) = self.callbacks.on_persist_account_registrations_fn { + let entries = &changeset.account_registrations; + match build_account_specs_for_callback(entries) { + Ok((specs, _xpub_bytes_storage)) => { + let result = unsafe { + cb( + self.callbacks.context, + wallet_id.as_ptr(), + specs.as_ptr(), + specs.len(), + ) + }; + // Force the spec / byte buffers to live + // until after the callback even though + // their drop happens on scope exit anyway. + drop(specs); + drop(_xpub_bytes_storage); + if result != 0 { + eprintln!( + "Account registrations persistence callback returned error code {}", + result + ); + round_success = false; + } + } + Err(e) => { + eprintln!("Failed to encode account registration specs: {}", e); + round_success = false; + } + } + } + } + + // Per-account address-pool snapshots. The `_string_storage` + // Vec keeps every owned `CString` alive for the callback + // window; `_address_storage` keeps every per-pool + // `Vec` alive (each pool holds pointers + // into a sibling string buffer); `_pools` is the heap-array + // the callback iterates over. 
+ if !changeset.account_address_pools.is_empty() { + if let Some(cb) = self.callbacks.on_persist_account_address_pools_fn { + match build_address_pools_for_callback(&changeset.account_address_pools) { + Ok((pools, _address_storage, _string_storage)) => { + let result = unsafe { + cb( + self.callbacks.context, + wallet_id.as_ptr(), + pools.as_ptr(), + pools.len(), + ) + }; + drop(pools); + drop(_address_storage); + drop(_string_storage); + if result != 0 { + eprintln!( + "Account address pools persistence callback returned error code {}", + result + ); + round_success = false; + } + } + Err(e) => { + eprintln!("Failed to encode account address pool entries: {}", e); + round_success = false; + } + } + } + } + // Send incremental address balance updates before merging. if let Some(ref addr_cs) = changeset.platform_addresses { if let Some(cb) = self.callbacks.on_persist_address_balances_fn { @@ -465,6 +615,115 @@ impl PlatformWalletPersistence for FFIPersister { } } + // Send DashPay contact-request changeset. + // + // The flat upsert array is built by walking every source + // bucket on the changeset: + // - `sent_requests` ⇒ one outgoing row per entry + // - `incoming_requests` ⇒ one incoming row per entry + // - `established` ⇒ two rows per entry (the underlying + // outgoing + incoming `ContactRequest` on + // `EstablishedContact`) so the Swift uniqueness key + // `(network, owner, contact, is_outgoing)` upserts both + // directions cleanly. The auto-establishment contract on + // the Rust side drops any matching `sent_requests` / + // `incoming_requests` entry when promoting to established, + // so this projection never produces a duplicate row in a + // single round. + // + // Removal arrays mirror the changeset's two tombstone fields + // 1:1 — Swift deletes rows by `(owner, contact, is_outgoing)` + // with the direction implied by which bucket they came from. 
+        if let Some(ref contacts_cs) = changeset.contacts {
+            if let Some(cb) = self.callbacks.on_persist_contacts_fn {
+                let mut upserts: Vec<ContactRequestFFI> = Vec::with_capacity(
+                    contacts_cs.sent_requests.len()
+                        + contacts_cs.incoming_requests.len()
+                        + contacts_cs.established.len() * 2,
+                );
+                for (key, entry) in &contacts_cs.sent_requests {
+                    upserts.push(ContactRequestFFI::from_outgoing(
+                        key.owner_id.to_buffer(),
+                        key.recipient_id.to_buffer(),
+                        &entry.request,
+                    ));
+                }
+                for (key, entry) in &contacts_cs.incoming_requests {
+                    upserts.push(ContactRequestFFI::from_incoming(
+                        key.owner_id.to_buffer(),
+                        key.sender_id.to_buffer(),
+                        &entry.request,
+                    ));
+                }
+                for (key, established) in &contacts_cs.established {
+                    upserts.push(ContactRequestFFI::from_outgoing(
+                        key.owner_id.to_buffer(),
+                        key.recipient_id.to_buffer(),
+                        &established.outgoing_request,
+                    ));
+                    upserts.push(ContactRequestFFI::from_incoming(
+                        key.owner_id.to_buffer(),
+                        key.recipient_id.to_buffer(),
+                        &established.incoming_request,
+                    ));
+                }
+                let removed_sent: Vec<ContactRequestRemovalFFI> = contacts_cs
+                    .removed_sent
+                    .iter()
+                    .map(|key| ContactRequestRemovalFFI {
+                        owner_id: key.owner_id.to_buffer(),
+                        contact_id: key.recipient_id.to_buffer(),
+                    })
+                    .collect();
+                let removed_incoming: Vec<ContactRequestRemovalFFI> = contacts_cs
+                    .removed_incoming
+                    .iter()
+                    .map(|key| ContactRequestRemovalFFI {
+                        owner_id: key.owner_id.to_buffer(),
+                        contact_id: key.sender_id.to_buffer(),
+                    })
+                    .collect();
+                if !upserts.is_empty() || !removed_sent.is_empty() || !removed_incoming.is_empty() {
+                    let result = unsafe {
+                        cb(
+                            self.callbacks.context,
+                            wallet_id.as_ptr(),
+                            if upserts.is_empty() {
+                                std::ptr::null()
+                            } else {
+                                upserts.as_ptr()
+                            },
+                            upserts.len(),
+                            if removed_sent.is_empty() {
+                                std::ptr::null()
+                            } else {
+                                removed_sent.as_ptr()
+                            },
+                            removed_sent.len(),
+                            if removed_incoming.is_empty() {
+                                std::ptr::null()
+                            } else {
+                                removed_incoming.as_ptr()
+                            },
+                            removed_incoming.len(),
+                        )
+                    };
+                    // Release every heap-allocated payload before the
+                    // outer Vec
drops its storage. + if !upserts.is_empty() { + unsafe { free_contact_requests_ffi(upserts.as_mut_ptr(), upserts.len()) }; + } + if result != 0 { + eprintln!( + "Contact persistence callback returned error code {}", + result + ); + round_success = false; + } + } + } + } + // Send sync state updates. if let Some(ref addr_cs) = changeset.platform_addresses { if let Some(cb) = self.callbacks.on_persist_sync_state_fn { @@ -586,163 +845,6 @@ impl PlatformWalletPersistence for FFIPersister { } Ok(out) } - - fn store_account( - &self, - wallet_id: WalletId, - account_type: &AccountType, - account_xpub: &ExtendedPubKey, - ) -> Result<(), PersistenceError> { - let Some(cb) = self.callbacks.on_persist_account_fn else { - return Ok(()); - }; - let xpub_bytes = bincode::encode_to_vec(account_xpub, config::standard()) - .map_err(|e| format!("failed to encode account xpub: {}", e))?; - let spec = build_account_spec_ffi(account_type, &xpub_bytes); - let result = unsafe { cb(self.callbacks.context, wallet_id.as_ptr(), &spec) }; - if result != 0 { - return Err(format!( - "Persistence account callback returned error code {}", - result - ) - .into()); - } - Ok(()) - } - - fn store_account_addresses( - &self, - wallet_id: WalletId, - account_type: &AccountType, - pool_type: AddressPoolType, - addresses: &[AddressInfo], - ) -> Result<(), PersistenceError> { - let Some(cb) = self.callbacks.on_persist_account_addresses_fn else { - return Ok(()); - }; - if addresses.is_empty() { - return Ok(()); - } - - let pool_tag = match pool_type { - AddressPoolType::External => AddressPoolTypeTagFFI::External, - AddressPoolType::Internal => AddressPoolTypeTagFFI::Internal, - AddressPoolType::Absent => AddressPoolTypeTagFFI::Absent, - AddressPoolType::AbsentHardened => AddressPoolTypeTagFFI::AbsentHardened, - } as u8; - - // Whether the address pool belongs to a DIP-17 PlatformPayment - // account. 
The addresses themselves are the same (P2PKH / P2SH - // hashes derived from the wallet), but Platform Payment - // addresses are rendered as DIP-0018 bech32m (`dash1…` / - // `tdash1…`) rather than the base58check Core form. - let is_platform_payment = matches!(account_type, AccountType::PlatformPayment { .. }); - - // Build owned CStrings for every (address, path) pair so they - // outlive the callback window. `entries` borrows the pointers. - let mut owned_strings: Vec = Vec::with_capacity(addresses.len() * 2); - let mut entries: Vec = Vec::with_capacity(addresses.len()); - for info in addresses { - // Pick the right display encoding based on whether this - // address belongs to a PlatformPayment pool. If the - // `PlatformAddress` conversion fails (only supports P2PKH - // and P2SH), fall back to the base58check form so the - // address is still surfaced to the caller. - let rendered_address = if is_platform_payment { - let network = *info.address.network(); - let converted: Result = - PlatformAddress::try_from(info.address.clone()); - converted - .map(|p| p.to_bech32m_string(network)) - .unwrap_or_else(|_| info.address.to_string()) - } else { - info.address.to_string() - }; - let address_c = CString::new(rendered_address) - .map_err(|e| format!("address contained NUL byte: {}", e))?; - let path_c = CString::new(info.path.to_string()) - .map_err(|e| format!("derivation path contained NUL byte: {}", e))?; - let address_ptr = address_c.as_ptr(); - let path_ptr = path_c.as_ptr(); - owned_strings.push(address_c); - owned_strings.push(path_c); - - let mut public_key = [0u8; 33]; - let has_public_key = match &info.public_key { - Some(PublicKeyType::ECDSA(bytes)) if bytes.len() == 33 => { - public_key.copy_from_slice(bytes); - true - } - _ => false, - }; - - entries.push(CoreAddressEntryFFI { - public_key, - has_public_key, - pool_type_tag: pool_tag, - address_index: info.index, - is_used: info.used, - balance: info.balance, - address_base58: address_ptr, - 
derivation_path: path_ptr,
-            });
-        }
-
-        // Identify the account to Swift using the same flat-spec shape
-        // `on_persist_account_fn` uses (minus the per-account xpub —
-        // irrelevant for the address-write path).
-        let empty_xpub: &[u8] = &[];
-        let spec = build_account_spec_ffi(account_type, empty_xpub);
-
-        let result = unsafe {
-            cb(
-                self.callbacks.context,
-                wallet_id.as_ptr(),
-                &spec,
-                entries.as_ptr(),
-                entries.len(),
-            )
-        };
-        // Force `owned_strings` to live until after the callback.
-        drop(owned_strings);
-
-        if result != 0 {
-            return Err(format!(
-                "Persistence account_addresses callback returned error code {}",
-                result
-            )
-            .into());
-        }
-        Ok(())
-    }
-
-    fn store_wallet_metadata(
-        &self,
-        wallet_id: WalletId,
-        network: Network,
-        birth_height: u32,
-    ) -> Result<(), PersistenceError> {
-        let Some(cb) = self.callbacks.on_persist_wallet_metadata_fn else {
-            return Ok(());
-        };
-        let network_tag = network_tag_for(network);
-        let result = unsafe {
-            cb(
-                self.callbacks.context,
-                wallet_id.as_ptr(),
-                network_tag,
-                birth_height,
-            )
-        };
-        if result != 0 {
-            return Err(format!(
-                "Persistence wallet_metadata callback returned error code {}",
-                result
-            )
-            .into());
-        }
-        Ok(())
-    }
 }

 /// Reverse of [`network_from_tag`] — keeps the discriminant in sync
@@ -864,6 +966,165 @@ fn build_account_spec_ffi(account_type: &AccountType, xpub_bytes: &[u8]) -> Acco
     spec
 }

+/// Build the `Vec<AccountSpecFFI>` array for
+/// `on_persist_account_registrations_fn` plus the parallel
+/// `Vec<Vec<u8>>` of bincoded xpub byte buffers each spec borrows
+/// from. The two Vecs share lifetime — caller drops both after the
+/// callback returns.
+fn build_account_specs_for_callback(
+    entries: &[AccountRegistrationEntry],
+) -> Result<(Vec<AccountSpecFFI>, Vec<Vec<u8>>), String> {
+    // Pre-encode every xpub once so the spec slot can borrow the
+    // pointer + length without a self-referential lifetime trick.
+    let xpub_buffers: Vec<Vec<u8>> = entries
+        .iter()
+        .map(|entry| {
+            bincode::encode_to_vec(entry.account_xpub, config::standard())
+                .map_err(|e| format!("failed to encode account xpub: {}", e))
+        })
+        .collect::<Result<Vec<Vec<u8>>, String>>()?;
+    let specs: Vec<AccountSpecFFI> = entries
+        .iter()
+        .zip(xpub_buffers.iter())
+        .map(|(entry, bytes)| build_account_spec_ffi(&entry.account_type, bytes))
+        .collect();
+    Ok((specs, xpub_buffers))
+}
+
+/// Build the `Vec<AccountAddressPoolFFI>` array for
+/// `on_persist_account_address_pools_fn`.
+///
+/// Returns three parallel Vecs whose lifetimes are tied together:
+/// 1. `Vec<AccountAddressPoolFFI>` — the heap-array the callback
+///    iterates over. Each entry's `addresses_ptr` borrows into one
+///    of the inner Vecs from (2).
+/// 2. `Vec<Vec<CoreAddressEntryFFI>>` — one inner Vec per pool,
+///    holding the pool's address entries. Each entry's c-string
+///    pointers borrow into (3).
+/// 3. `Vec<CString>` — owned c-string storage for every (address,
+///    derivation_path) pair across all pools.
+///
+/// Caller must keep all three alive until after the FFI callback
+/// returns. Mirrors the lifetime discipline the prior dedicated
+/// `store_account_addresses` impl used; same forgiveness on
+/// PlatformAddress conversion failures (falls back to base58check).
+#[allow(clippy::type_complexity)]
+fn build_address_pools_for_callback(
+    entries: &[AccountAddressPoolEntry],
+) -> Result<
+    (
+        Vec<AccountAddressPoolFFI>,
+        Vec<Vec<CoreAddressEntryFFI>>,
+        Vec<CString>,
+    ),
+    String,
+> {
+    // Owned string pool — every (address, path) c-string borrowed by
+    // every CoreAddressEntryFFI lives in this Vec until callback end.
+    let mut owned_strings: Vec<CString> = Vec::new();
+    // Per-pool address-entry storage. Indexed parallel to the
+    // returned `pools` Vec; pool i's `addresses_ptr` points at
+    // `address_storage[i].as_ptr()`.
+    let mut address_storage: Vec<Vec<CoreAddressEntryFFI>> = Vec::with_capacity(entries.len());
+    let mut pools: Vec<AccountAddressPoolFFI> = Vec::with_capacity(entries.len());
+
+    for entry in entries {
+        let pool_tag = match entry.pool_type {
+            AddressPoolType::External => AddressPoolTypeTagFFI::External,
+            AddressPoolType::Internal => AddressPoolTypeTagFFI::Internal,
+            AddressPoolType::Absent => AddressPoolTypeTagFFI::Absent,
+            AddressPoolType::AbsentHardened => AddressPoolTypeTagFFI::AbsentHardened,
+        } as u8;
+
+        let is_platform_payment = matches!(entry.account_type, AccountType::PlatformPayment { .. });
+
+        let mut pool_entries: Vec<CoreAddressEntryFFI> = Vec::with_capacity(entry.addresses.len());
+        for info in &entry.addresses {
+            let entry_ffi = build_core_address_entry_ffi(
+                info,
+                pool_tag,
+                is_platform_payment,
+                &mut owned_strings,
+            )?;
+            pool_entries.push(entry_ffi);
+        }
+
+        // Account spec borrows an empty xpub slice — the
+        // address-pool callback receiver does not need the xpub
+        // (it matches by the same identifier subset
+        // `on_persist_account_registrations_fn` uses).
+        let empty_xpub: &[u8] = &[];
+        let spec = build_account_spec_ffi(&entry.account_type, empty_xpub);

+        // Build the FFI struct after the inner Vec is finalized so
+        // the pointer is stable.
+        let addresses_ptr = pool_entries.as_ptr();
+        let addresses_count = pool_entries.len();
+        address_storage.push(pool_entries);
+
+        pools.push(AccountAddressPoolFFI {
+            account: spec,
+            pool_type_tag: pool_tag,
+            addresses_ptr,
+            addresses_count,
+        });
+    }
+
+    Ok((pools, address_storage, owned_strings))
+}
+
+/// Build a single `CoreAddressEntryFFI` from an `AddressInfo`,
+/// pushing the owned (address, path) c-strings into `owned_strings`
+/// so they outlive the callback window.
+fn build_core_address_entry_ffi(
+    info: &AddressInfo,
+    pool_type_tag: u8,
+    is_platform_payment: bool,
+    owned_strings: &mut Vec<CString>,
+) -> Result<CoreAddressEntryFFI, String> {
+    // Pick the right display encoding. PlatformPayment pools render
+    // as DIP-0018 bech32m; everything else uses base58check. If the
+    // PlatformAddress conversion fails (only P2PKH / P2SH supported)
+    // fall back to base58check so the address still surfaces.
+    let rendered_address = if is_platform_payment {
+        let network = *info.address.network();
+        let converted: Result<PlatformAddress, _> = PlatformAddress::try_from(info.address.clone());
+        converted
+            .map(|p| p.to_bech32m_string(network))
+            .unwrap_or_else(|_| info.address.to_string())
+    } else {
+        info.address.to_string()
+    };
+    let address_c =
+        CString::new(rendered_address).map_err(|e| format!("address contained NUL byte: {}", e))?;
+    let path_c = CString::new(info.path.to_string())
+        .map_err(|e| format!("derivation path contained NUL byte: {}", e))?;
+    let address_ptr = address_c.as_ptr();
+    let path_ptr = path_c.as_ptr();
+    owned_strings.push(address_c);
+    owned_strings.push(path_c);
+
+    let mut public_key = [0u8; 33];
+    let has_public_key = match &info.public_key {
+        Some(PublicKeyType::ECDSA(bytes)) if bytes.len() == 33 => {
+            public_key.copy_from_slice(bytes);
+            true
+        }
+        _ => false,
+    };
+
+    Ok(CoreAddressEntryFFI {
+        public_key,
+        has_public_key,
+        pool_type_tag,
+        address_index: info.index,
+        is_used: info.used,
+        balance: info.balance,
+        address_base58: address_ptr,
+        derivation_path: path_ptr,
+    })
+}
+
 /// RAII drop-guard that invokes the paired free callback on exit, so
 /// any error path through `FFIPersister::load` still returns memory
 /// to Swift.
diff --git a/packages/rs-platform-wallet-ffi/src/token_persistence.rs b/packages/rs-platform-wallet-ffi/src/token_persistence.rs
index 89c1bf4b406..e4c7ded726a 100644
--- a/packages/rs-platform-wallet-ffi/src/token_persistence.rs
+++ b/packages/rs-platform-wallet-ffi/src/token_persistence.rs
@@ -2,18 +2,18 @@
 //! [`TokenBalanceChangeSet`](platform_wallet::changeset::TokenBalanceChangeSet)
 //! out of [`FFIPersister`](crate::persistence::FFIPersister) to Swift.
 //!
-//! Mirrors the shape of the (`identity_id`, `token_id`) keyed
-//!
`BTreeMap<(Identifier, Identifier), TokenAmount>` on -//! `PlatformWalletInfo.token_balances`. Swift maps each upsert onto a -//! `PersistentTokenBalance` row keyed by `(tokenId, identityId)` and -//! drops rows for every removal. +//! Mirrors the shape of the `(identity_id, token_id) -> balance` +//! changeset emitted by +//! [`IdentitySyncManager::sync_now`](platform_wallet::IdentitySyncManager). +//! Swift maps each upsert onto a `PersistentTokenBalance` row keyed +//! by `(tokenId, identityId)` and drops rows for every removal. //! -//! The `watched` / `unwatched` portions of the changeset are -//! intentionally not surfaced over FFI today: there is no Swift-side -//! schema for the watch registry — the sync driver re-watches on every -//! tick from the union of (local identities x known tokens), so the -//! registry is reconstructed in-memory from that source of truth -//! rather than persisted independently. +//! The watch list itself is not part of this projection — the +//! per-identity registry lives in the manager's in-memory cache and +//! is rehydrated on app start from whatever the Swift side passes to +//! `platform_wallet_manager_identity_sync_register_identity` / +//! `_update_watched_tokens`. Persisted balance rows are the only +//! durable record carried across launches. /// Flat C mirror of one `(identity_id, token_id) -> balance` row from /// `TokenBalanceChangeSet.balances`. 
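The balance projection this module documents can be sketched in isolation. The following is a minimal, standalone sketch (not the actual FFI types — `TokenBalanceRow` and `project` are illustrative names, and the real code keys rows by 32-byte `Identifier`s rather than raw byte arrays) of how a `(identity_id, token_id) -> balance` map plus a tombstone set flattens into the upsert / removal arrays a C callback would iterate over:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Illustrative flat mirror of one `(identity_id, token_id) -> balance`
/// row; the real `TokenBalanceUpsertFFI` layout may differ.
#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct TokenBalanceRow {
    pub identity_id: [u8; 32],
    pub token_id: [u8; 32],
    pub balance: u64,
}

/// Flatten a changeset-shaped balance map plus removal tombstones into
/// the contiguous arrays an FFI callback would receive.
pub fn project(
    balances: &BTreeMap<([u8; 32], [u8; 32]), u64>,
    removed: &BTreeSet<([u8; 32], [u8; 32])>,
) -> (Vec<TokenBalanceRow>, Vec<([u8; 32], [u8; 32])>) {
    let upserts = balances
        .iter()
        .map(|(&(identity_id, token_id), &balance)| TokenBalanceRow {
            identity_id,
            token_id,
            balance,
        })
        .collect();
    // Removals carry no balance — just the composite key to delete.
    let removals = removed.iter().copied().collect();
    (upserts, removals)
}
```

In this shape each upsert row maps onto one `PersistentTokenBalance` write and each removal onto one delete; iteration order is deterministic because `BTreeMap`/`BTreeSet` iterate in key order.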
diff --git a/packages/rs-platform-wallet-ffi/src/tokens/group_queries.rs b/packages/rs-platform-wallet-ffi/src/tokens/group_queries.rs index f93b3a76d90..36bfc3ecfa3 100644 --- a/packages/rs-platform-wallet-ffi/src/tokens/group_queries.rs +++ b/packages/rs-platform-wallet-ffi/src/tokens/group_queries.rs @@ -20,7 +20,8 @@ use std::os::raw::c_char; use dpp::group::group_action_status::GroupActionStatus; use dpp::tokens::emergency_action::TokenEmergencyAction; use platform_wallet::wallet::tokens::{ - GroupActionEntry, GroupActionParams, GroupActionSignerEntry, + group_action_signers_external, pending_group_actions_external, GroupActionEntry, + GroupActionParams, GroupActionSignerEntry, }; use serde_json::{json, Value}; @@ -303,17 +304,17 @@ pub unsafe extern "C" fn platform_wallet_token_pending_group_actions( PLATFORM_WALLET_STORAGE .with_item(wallet_handle, |wallet| { - let token_wallet = wallet.tokens().clone(); + let sdk = wallet.sdk_arc(); let result = block_on_worker(async move { - token_wallet - .pending_group_actions_external( - contract_id, - group_contract_position, - status_enum, - start_at, - limit_opt, - ) - .await + pending_group_actions_external( + sdk.as_ref(), + contract_id, + group_contract_position, + status_enum, + start_at, + limit_opt, + ) + .await }); match result { Ok(entries) => { @@ -412,16 +413,16 @@ pub unsafe extern "C" fn platform_wallet_token_group_action_signers( PLATFORM_WALLET_STORAGE .with_item(wallet_handle, |wallet| { - let token_wallet = wallet.tokens().clone(); + let sdk = wallet.sdk_arc(); let result = block_on_worker(async move { - token_wallet - .group_action_signers_external( - contract_id, - group_contract_position, - status_enum, - action_id_decoded, - ) - .await + group_action_signers_external( + sdk.as_ref(), + contract_id, + group_contract_position, + status_enum, + action_id_decoded, + ) + .await }); match result { Ok(entries) => { diff --git a/packages/rs-platform-wallet-ffi/src/tokens/mod.rs 
b/packages/rs-platform-wallet-ffi/src/tokens/mod.rs index 78a434cfaae..9c08261d563 100644 --- a/packages/rs-platform-wallet-ffi/src/tokens/mod.rs +++ b/packages/rs-platform-wallet-ffi/src/tokens/mod.rs @@ -24,7 +24,6 @@ pub mod pause; pub mod purchase; pub mod resume; pub mod set_price; -pub mod sync; pub mod transfer; pub mod unfreeze; pub mod update_config; @@ -39,7 +38,6 @@ pub use pause::*; pub use purchase::*; pub use resume::*; pub use set_price::*; -pub use sync::*; pub use transfer::*; pub use unfreeze::*; pub use update_config::*; diff --git a/packages/rs-platform-wallet-ffi/src/tokens/sync.rs b/packages/rs-platform-wallet-ffi/src/tokens/sync.rs deleted file mode 100644 index 36f9801dbb8..00000000000 --- a/packages/rs-platform-wallet-ffi/src/tokens/sync.rs +++ /dev/null @@ -1,97 +0,0 @@ -//! FFI binding for [`TokenWallet::watch`] + [`TokenWallet::sync`]. -//! -//! Watching and syncing are wallet-scope bookkeeping (no signer, no -//! state transition) — the call just registers `(identity_id, -//! token_id)` pairs in the in-memory watch registry and then queries -//! Platform per identity for the matching balances. The resulting -//! `TokenBalanceChangeSet` flows through the persister, surfacing as -//! `on_persist_token_balances_fn` on the Swift side. -//! -//! This is the single entry point Swift needs to populate -//! `PersistentTokenBalance` rows: it ships in the `(identity, token)` -//! pairs the UI cares about, Rust does the watch + Platform fetch + -//! changeset emission, and the persister callback writes them to -//! SwiftData. - -use std::slice; - -use crate::error::*; -use crate::handle::*; -use crate::runtime::block_on_worker; -use crate::token_persistence::TokenBalanceUpsertFFI; - -/// Watch every `(identity_id, token_id)` pair in `pairs`, then run a -/// single Platform sync round to refresh the cached balances. 
-/// -/// The persister callback (`on_persist_token_balances_fn`) fires once -/// the sync round completes with the resulting upsert / removal lists. -/// -/// `pairs` reuses [`TokenBalanceUpsertFFI`] for its layout — the -/// 32-byte identity id + 32-byte token id — and ignores `balance`. We -/// reuse the type rather than introducing a near-identical -/// `TokenBalancePairFFI` so the Swift side can share the same struct -/// for input + persist callbacks. Pass `pairs_count = 0` to skip the -/// watch step (sync alone). -/// -/// # Safety -/// - `wallet_handle` must come from the platform-wallet handle registry. -/// - `pairs` must be either NULL or point at exactly `pairs_count` -/// readable [`TokenBalanceUpsertFFI`] entries. -/// - `out_error` may be NULL. -#[no_mangle] -pub unsafe extern "C" fn platform_wallet_token_watch_and_sync( - wallet_handle: Handle, - pairs: *const TokenBalanceUpsertFFI, - pairs_count: usize, - out_error: *mut PlatformWalletFFIError, -) -> PlatformWalletFFIResult { - let pair_slice: &[TokenBalanceUpsertFFI] = if pairs.is_null() || pairs_count == 0 { - &[] - } else { - unsafe { slice::from_raw_parts(pairs, pairs_count) } - }; - - // Materialize the watch list now while we hold the &[u8; 32] view - // — the async block below can't borrow from this stack frame. 
- let watch_pairs: Vec<(dpp::prelude::Identifier, dpp::prelude::Identifier)> = pair_slice - .iter() - .map(|p| { - ( - dpp::prelude::Identifier::from(p.identity_id), - dpp::prelude::Identifier::from(p.token_id), - ) - }) - .collect(); - - PLATFORM_WALLET_STORAGE - .with_item(wallet_handle, |wallet| { - let token_wallet = wallet.tokens().clone(); - let result = block_on_worker(async move { - for (identity_id, token_id) in &watch_pairs { - token_wallet.watch(*identity_id, *token_id).await; - } - token_wallet.sync().await - }); - match result { - Ok(_) => PlatformWalletFFIResult::Success, - Err(e) => { - if !out_error.is_null() { - *out_error = PlatformWalletFFIError::new( - PlatformWalletFFIResult::ErrorWalletOperation, - format!("token_watch_and_sync failed: {e}"), - ); - } - PlatformWalletFFIResult::ErrorWalletOperation - } - } - }) - .unwrap_or_else(|| { - if !out_error.is_null() { - *out_error = PlatformWalletFFIError::new( - PlatformWalletFFIResult::ErrorInvalidHandle, - "Invalid platform-wallet handle", - ); - } - PlatformWalletFFIResult::ErrorInvalidHandle - }) -} diff --git a/packages/rs-platform-wallet-ffi/src/wallet_registration_persistence.rs b/packages/rs-platform-wallet-ffi/src/wallet_registration_persistence.rs new file mode 100644 index 00000000000..cd6c488d91c --- /dev/null +++ b/packages/rs-platform-wallet-ffi/src/wallet_registration_persistence.rs @@ -0,0 +1,85 @@ +//! C-compatible types for the wallet-registration round persistence +//! callbacks (`on_persist_account_registrations_fn`, +//! `on_persist_account_address_pools_fn`). +//! +//! These ride on `FFIPersister::store(wallet_id, changeset)` — the +//! whole-changeset entry point — rather than dedicated trait methods, +//! so the registration round (metadata + per-account specs + per-pool +//! snapshots) is one atomic round from the backend's perspective. +//! +//! The per-account-spec callback reuses [`AccountSpecFFI`] from +//! 
[`crate::wallet_restore_types`] verbatim — the same flat shape +//! Swift already consumes on the load path, so no new account-shape +//! mirror lands on the Swift side. +//! +//! The per-pool callback wraps an array of [`AccountAddressPoolFFI`] +//! values; each entry carries the owning account spec, the pool-type +//! discriminant, and a slice of [`CoreAddressEntryFFI`] rows. Swift +//! iterates and persists, the same row shape it already knows from +//! the legacy `on_persist_account_addresses_fn` path. + +use crate::core_address_types::CoreAddressEntryFFI; +use crate::wallet_restore_types::AccountSpecFFI; + +/// A single (account, pool, addresses) snapshot inside the +/// `on_persist_account_address_pools_fn` array. +/// +/// `spec` borrows from a Rust-owned heap allocation that lives until +/// after the callback returns; the xpub-bytes pointer on `spec` is +/// `null` / `0` for this path because the receiver doesn't need it +/// (the spec is used purely as a lookup key into the per-account +/// SwiftData row). +/// +/// `addresses_ptr` borrows a contiguous `[CoreAddressEntryFFI]` slice +/// for the duration of the callback. The strings each entry points +/// at are likewise borrowed from a Rust-owned `CString` pool kept +/// alive in the dispatcher until the callback returns. +#[repr(C)] +pub struct AccountAddressPoolFFI { + /// Account this pool belongs to. The receiver matches by the + /// same fields it uses for `on_persist_account_registrations_fn` + /// (type tag, index, registration index, key class, identity-id + /// pair). The `account_xpub_bytes` pointer is `null` and length + /// is 0 on this path; consumers must not dereference it. + pub account: AccountSpecFFI, + /// Pool variant — `AddressPoolTypeTagFFI` raw value + /// (0 = External, 1 = Internal, 2 = Absent, 3 = AbsentHardened). + pub pool_type_tag: u8, + /// Pointer to a contiguous `[CoreAddressEntryFFI]` array of + /// `addresses_count` entries. 
Borrowed; valid only for the
+    /// callback window.
+    pub addresses_ptr: *const CoreAddressEntryFFI,
+    pub addresses_count: usize,
+}
+
+// SAFETY: pointers are Rust-owned and outlive the callback window;
+// the struct itself is plain data. Send/Sync to match the rest of
+// the FFI surface.
+unsafe impl Send for AccountAddressPoolFFI {}
+unsafe impl Sync for AccountAddressPoolFFI {}
+
+// Compile-time guard — if anyone reshapes `AccountAddressPoolFFI`
+// without also updating the Swift side, cargo builds fail with an
+// obvious error rather than producing a dylib that the Swift side
+// will mis-parse at runtime (which surfaces as a random
+// EXC_BAD_ACCESS in the persistAccountAddressPools callback).
+//
+// Expected layout on 64-bit targets (all fields in declaration
+// order under `#[repr(C)]`):
+//
+//   0..=95   account   AccountSpecFFI (96 bytes — see the layout
+//                      note on AccountSpecFFI)
+//   ...
+//
+// The exact internal padding inside `AccountSpecFFI` is fixed by the
+// upstream layout guard in `wallet_restore_types`; we only pin the
+// outer struct size here. On 64-bit targets the trailing pool fields
+// add `1 + 7 (pad) + 8 (ptr) + 8 (len) = 24` bytes after a 96-byte
+// account, for a total of 120.
+//
+// Recompute via `std::mem::size_of` if the spec layout changes.
+const _: [u8; 120] = [0u8; std::mem::size_of::<AccountAddressPoolFFI>()];
+const _: [u8; 8] = [0u8; std::mem::align_of::<AccountAddressPoolFFI>()];
diff --git a/packages/rs-platform-wallet-ffi/src/wallet_restore_types.rs b/packages/rs-platform-wallet-ffi/src/wallet_restore_types.rs
index ab641964177..29c59049a56 100644
--- a/packages/rs-platform-wallet-ffi/src/wallet_restore_types.rs
+++ b/packages/rs-platform-wallet-ffi/src/wallet_restore_types.rs
@@ -1,11 +1,11 @@
 //! C-compatible types for watch-only wallet restore via the load-side
 //! callbacks on [`PersistenceCallbacks`](crate::persistence::PersistenceCallbacks).
 //!
-//!
On write: `on_persist_wallet_root_xpub_fn` and `on_persist_account_fn` -//! fire with these shapes so Swift can store them in SwiftData. +//! On write: `on_persist_account_registrations_fn` fires with the +//! `AccountSpecFFI` shape so Swift can store accounts in SwiftData. //! On load: `on_load_wallet_list_fn` returns an array of //! `WalletRestoreEntryFFI` which Rust assembles into a watch-only -//! `Wallet` via `Wallet::from_xpub` + per-account `Account::from_xpub`. +//! `Wallet` via `Wallet::new_watch_only` + per-account `Account::from_xpub`. //! //! All `*const u8` pointers must stay valid for the duration of the //! load callback. Swift owns the allocation and is asked to free it diff --git a/packages/rs-platform-wallet-ffi/src/xpub_render.rs b/packages/rs-platform-wallet-ffi/src/xpub_render.rs index 632879dcf03..71267619585 100644 --- a/packages/rs-platform-wallet-ffi/src/xpub_render.rs +++ b/packages/rs-platform-wallet-ffi/src/xpub_render.rs @@ -11,7 +11,7 @@ use std::ptr; use crate::error::{PlatformWalletFFIError, PlatformWalletFFIResult}; /// Decode a bincode-encoded `ExtendedPubKey` (as emitted by -/// `on_persist_account_fn`) and render it as a BIP32 base58check +/// `on_persist_account_registrations_fn`) and render it as a BIP32 base58check /// string. No network tag is required — the encoded `ExtendedPubKey` /// carries its own `network` field so the `xpub…`/`tpub…` prefix is /// produced automatically. diff --git a/packages/rs-platform-wallet/Cargo.toml b/packages/rs-platform-wallet/Cargo.toml index 71e0e0e9bc8..13d0ccb00af 100644 --- a/packages/rs-platform-wallet/Cargo.toml +++ b/packages/rs-platform-wallet/Cargo.toml @@ -55,6 +55,9 @@ zip32 = { version = "0.2.0", default-features = false, optional = true } rand = "0.8" static_assertions = "1.1" tracing-subscriber = { version = "0.3", features = ["env-filter"] } +# Re-enable the SDK with mocks feature for test-only mock builders; +# the non-test build keeps the leaner default-feature SDK above. 
+dash-sdk = { path = "../rs-sdk", default-features = false, features = ["dashpay-contract", "dpns-contract", "mocks"] } [features] diff --git a/packages/rs-platform-wallet/src/changeset/changeset.rs b/packages/rs-platform-wallet/src/changeset/changeset.rs index fedb524b0b0..d1afc6fbee2 100644 --- a/packages/rs-platform-wallet/src/changeset/changeset.rs +++ b/packages/rs-platform-wallet/src/changeset/changeset.rs @@ -28,8 +28,11 @@ use dashcore::Txid; use dash_sdk::platform::address_sync::AddressFunds; use dpp::prelude::AssetLockProof; +use key_wallet::account::AccountType; +use key_wallet::bip32::ExtendedPubKey; +use key_wallet::managed_account::address_pool::AddressPoolType; use key_wallet::managed_account::transaction_record::TransactionRecord; -use key_wallet::{PlatformP2PKHAddress, Utxo}; +use key_wallet::{AddressInfo, Network, PlatformP2PKHAddress, Utxo}; use crate::wallet::platform_wallet::WalletId; @@ -670,54 +673,119 @@ impl Merge for AssetLockChangeSet { // Token Balances // --------------------------------------------------------------------------- -/// Changes to watched Platform token balances. +/// Per-(identity, token) balance changes emitted by +/// [`crate::manager::identity_sync::IdentitySyncManager::sync_now`]. /// -/// Mirrors `PlatformWalletInfo.token_balances` -/// (`BTreeMap<(Identifier, Identifier), TokenAmount>`) and -/// `PlatformWalletInfo.token_watched` -/// (`BTreeMap>`), plus tombstones for -/// entries removed by `unwatch` / `unwatch_identity`. +/// The watch list itself is no longer changeset-replicated — it lives +/// purely in the manager's in-memory cache. Persistence carries only +/// the post-sync balance updates and tombstones. #[derive(Debug, Clone, Default, PartialEq)] pub struct TokenBalanceChangeSet { /// Updated token balances keyed by `(identity_id, token_id)`. /// Last write wins on merge. pub balances: BTreeMap<(Identifier, Identifier), u64>, - /// Balances removed (`unwatch` / `unwatch_identity` / sync returned `None`). 
+    /// Balances removed (sync returned `None`, i.e. the identity no
+    /// longer holds this token on Platform).
     pub removed_balances: BTreeSet<(Identifier, Identifier)>,
-
-    /// Tokens newly watched per identity.
-    /// Merged via set union on the inner `BTreeSet`.
-    pub watched: BTreeMap<Identifier, BTreeSet<Identifier>>,
-
-    /// Tokens unwatched per identity.
-    /// Merged via set union on the inner `BTreeSet`.
-    pub unwatched: BTreeMap<Identifier, BTreeSet<Identifier>>,
 }

 impl Merge for TokenBalanceChangeSet {
     fn merge(&mut self, other: Self) {
         self.balances.extend(other.balances);
         self.removed_balances.extend(other.removed_balances);
-        for (identity_id, tokens) in other.watched {
-            self.watched.entry(identity_id).or_default().extend(tokens);
-        }
-        for (identity_id, tokens) in other.unwatched {
-            self.unwatched
-                .entry(identity_id)
-                .or_default()
-                .extend(tokens);
-        }
     }

     fn is_empty(&self) -> bool {
-        self.balances.is_empty()
-            && self.removed_balances.is_empty()
-            && self.watched.is_empty()
-            && self.unwatched.is_empty()
+        self.balances.is_empty() && self.removed_balances.is_empty()
     }
 }

+// ---------------------------------------------------------------------------
+// Wallet registration metadata + per-account spec / address-pool snapshots
+// ---------------------------------------------------------------------------
+
+/// Per-wallet metadata captured at registration. Carries fields not
+/// derivable from the xpub alone: which network the wallet is bound
+/// to and the birth-height best estimate (the SPV tip at create time;
+/// 0 means "scan from genesis / unknown").
+///
+/// The shape sits on [`PlatformWalletChangeSet`] as
+/// `Option<WalletMetadataEntry>` because the round emits at most one
+/// metadata blob per wallet — last-write-wins covers the rare race
+/// where two registrations fire for the same wallet id.
+///
+/// `Network` does not implement `Default`, so this entry intentionally
+/// only enters the changeset via explicit construction at registration
+/// time; the parent `Option<...>` field stays `None` for every other
+/// flush.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub struct WalletMetadataEntry {
+    /// Network the wallet is bound to.
+    pub network: Network,
+    /// Best estimate of the chain tip at creation time. `0` means
+    /// "scan from genesis / unknown".
+    pub birth_height: u32,
+}
+
+/// One entry per registered account. Captures the per-account xpub
+/// + type so a future load path can rebuild the wallet watch-only
+/// via `Account::from_xpub`. Hardened derivation at the account
+/// level means this is the only way to recover without the
+/// mnemonic.
+///
+/// Carried on [`PlatformWalletChangeSet`] as
+/// `Vec<AccountRegistrationEntry>`. `AccountType` is `PartialEq`
+/// but not `Ord`/`Hash`, so a `BTreeMap` keyed by it isn't possible
+/// without a derived index. In practice each account is emitted
+/// exactly once per registration round, and the apply path runs
+/// these through `Account::from_xpub`, which is idempotent on
+/// duplicate `(account_type, xpub)` pairs, so the merge policy
+/// is simple `extend`; dedup is the apply-side caller's
+/// responsibility if it ever matters.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct AccountRegistrationEntry {
+    /// The account variant being registered.
+    pub account_type: AccountType,
+    /// Extended public key for this account; bincode-encoded only at
+    /// the FFI boundary.
+    pub account_xpub: ExtendedPubKey,
+}
+
+/// Address-pool snapshot for one `(account_type, pool_type)` pair.
+///
+/// Routed through the changeset rather than a dedicated trait method
+/// so the registration round (metadata + per-account specs +
+/// per-pool snapshots) is one atomic
+/// [`PlatformWalletPersistence::store`](crate::changeset::PlatformWalletPersistence::store)
+/// from the backend's perspective.
+///
+/// **Merge policy** on the parent
+/// [`PlatformWalletChangeSet::account_address_pools`] field is plain
+/// `Vec::extend` — entries are *not* deduplicated by
+/// `(account_type, pool_type)`. The FFI emits whole-pool snapshots,
+/// so a second snapshot for the same key inside one merged round
+/// represents the latest pool state and the apply-time consumer is
+/// expected to treat the last entry per `(account_type, pool_type)`
+/// as authoritative. Mid-round multi-snapshots for the same key are
+/// not produced by any current emitter (snapshots fire at register,
+/// pool extension, and used-flag flip — each on a fresh `store`
+/// round), so this is a forward-looking documentation of intent
+/// rather than a hot path.
+///
+/// Not `PartialEq` — `AddressInfo` upstream is `Debug + Clone` only,
+/// so structural equality on `addresses` would require us to fork
+/// the upstream type. Tests that need to inspect snapshot contents
+/// reach into the `addresses` vec by index instead.
+#[derive(Debug, Clone)]
+pub struct AccountAddressPoolEntry {
+    /// Which account this pool belongs to.
+    pub account_type: AccountType,
+    /// Pool variant (External / Internal / Absent / AbsentHardened).
+    pub pool_type: AddressPoolType,
+    /// Snapshot of every `AddressInfo` entry in the pool at emit time.
+    pub addresses: Vec<AddressInfo>,
+}
+
 // ---------------------------------------------------------------------------
 // Top-Level PlatformWalletChangeSet
 // ---------------------------------------------------------------------------
@@ -768,6 +836,18 @@ pub struct PlatformWalletChangeSet {
     /// semantics as `dashpay_profiles` — extends existing payment maps
     /// via `BTreeMap::extend` (last-write-wins per tx_id).
     pub dashpay_payments_overlay: Option>>,
+    /// Per-wallet metadata emitted once at registration. See
+    /// [`WalletMetadataEntry`] for the merge policy.
+ pub wallet_metadata: Option<WalletMetadataEntry>, + /// Per-account registration entries emitted at registration / on + /// later `add_account` calls. See [`AccountRegistrationEntry`] for + /// the merge policy (plain `Vec::extend`; dedup is the apply-side + /// caller's job). + pub account_registrations: Vec<AccountRegistrationEntry>, + /// Address-pool snapshots emitted at wallet create (initial + /// gap-limit population) and on any pool extension / "used" flip. + /// See [`AccountAddressPoolEntry`] for the merge policy. + pub account_address_pools: Vec<AccountAddressPoolEntry>, } impl From<CoreChangeSet> for PlatformWalletChangeSet { @@ -849,6 +929,21 @@ impl Merge for PlatformWalletChangeSet { target.entry(id).or_default().extend(payments); } } + // Wallet metadata: last-write-wins. `Network` doesn't + // implement `Default`, so we can't lean on the `Option<T>: + // Merge` blanket impl (which requires `T: Merge + Default`); + // instead, `Some(other) -> overwrite`, `None -> keep current`. + if let Some(meta) = other.wallet_metadata { + self.wallet_metadata = Some(meta); + } + // Per-account specs and address-pool snapshots: append-only. + // See the type docstrings for the rationale (registration + // round emits each key once; snapshots are whole-pool, so + // duplicate keys within one merged round are a no-op).
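The last-write-wins / append-only split in this merge can be exercised with a std-only miniature. All names here (`MetaEntry`, `MiniChangeSet`) are illustrative stand-ins, not the crate's real types:

```rust
// Miniature of the merge policy: the metadata Option is
// last-write-wins (no `Default` on the payload, so no blanket
// `Option<T>: Merge`), while the per-account vec is plain `extend`.

#[derive(Debug, Clone, PartialEq)]
struct MetaEntry {
    birth_height: u32,
}

#[derive(Debug, Default)]
struct MiniChangeSet {
    wallet_metadata: Option<MetaEntry>,
    account_registrations: Vec<&'static str>,
}

impl MiniChangeSet {
    fn merge(&mut self, other: MiniChangeSet) {
        // Last-write-wins: `Some` overwrites, `None` keeps current.
        if let Some(meta) = other.wallet_metadata {
            self.wallet_metadata = Some(meta);
        }
        // Append-only; dedup is the apply-side caller's job.
        self.account_registrations.extend(other.account_registrations);
    }
}

fn main() {
    let mut a = MiniChangeSet {
        wallet_metadata: Some(MetaEntry { birth_height: 100 }),
        account_registrations: vec!["bip44/0"],
    };
    let b = MiniChangeSet {
        wallet_metadata: Some(MetaEntry { birth_height: 200 }),
        account_registrations: vec!["bip44/1"],
    };
    a.merge(b);
    assert_eq!(a.wallet_metadata, Some(MetaEntry { birth_height: 200 }));
    assert_eq!(a.account_registrations, vec!["bip44/0", "bip44/1"]);

    // Merging an empty changeset leaves metadata untouched.
    a.merge(MiniChangeSet::default());
    assert_eq!(a.wallet_metadata, Some(MetaEntry { birth_height: 200 }));
}
```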
+ self.account_registrations + .extend(other.account_registrations); + self.account_address_pools + .extend(other.account_address_pools); } fn is_empty(&self) -> bool { @@ -864,6 +959,9 @@ impl Merge for PlatformWalletChangeSet { .dashpay_payments_overlay .as_ref() .is_none_or(|m| m.is_empty()) + && self.wallet_metadata.is_none() + && self.account_registrations.is_empty() + && self.account_address_pools.is_empty() } } @@ -917,23 +1015,22 @@ mod tests { let mut a = TokenBalanceChangeSet::default(); a.balances.insert((identity_a, token_x), 100); - a.watched.entry(identity_a).or_default().insert(token_x); + a.removed_balances.insert((identity_a, token_y)); let mut b = TokenBalanceChangeSet::default(); - // Same identity/token — last-write-wins. + // Same identity/token — last-write-wins on balances. b.balances.insert((identity_a, token_x), 200); - // New token on same identity — merged into the watched set. - b.watched.entry(identity_a).or_default().insert(token_y); // New identity. b.balances.insert((identity_b, token_x), 50); + // Tombstone propagates as set union. + b.removed_balances.insert((identity_b, token_y)); a.merge(b); assert_eq!(a.balances.get(&(identity_a, token_x)), Some(&200)); assert_eq!(a.balances.get(&(identity_b, token_x)), Some(&50)); - let watched_a = a.watched.get(&identity_a).unwrap(); - assert!(watched_a.contains(&token_x)); - assert!(watched_a.contains(&token_y)); + assert!(a.removed_balances.contains(&(identity_a, token_y))); + assert!(a.removed_balances.contains(&(identity_b, token_y))); } #[test] diff --git a/packages/rs-platform-wallet/src/changeset/core_bridge.rs b/packages/rs-platform-wallet/src/changeset/core_bridge.rs index ddfcfe956e0..4c2a07dfc40 100644 --- a/packages/rs-platform-wallet/src/changeset/core_bridge.rs +++ b/packages/rs-platform-wallet/src/changeset/core_bridge.rs @@ -1,24 +1,28 @@ //! Adapter that turns upstream `WalletEvent`s into `PlatformWalletChangeSet`s. //! -//! 
Upstream `key_wallet_manager` no longer carries a `WalletPersistence` -//! callback — each `WalletManager` exposes a `broadcast::Sender<WalletEvent>` -//! and consumers subscribe at startup. [`WalletEventAdapter`] is the -//! platform-wallet-side subscriber: a tokio task that drains the event -//! stream, projects each event into a [`CoreChangeSet`], wraps it in a -//! [`PlatformWalletChangeSet`], and forwards to the platform persister. +//! Upstream `key_wallet_manager::WalletManager` exposes a +//! `broadcast::Sender<WalletEvent>` and a `subscribe_events()` accessor +//! returning a `broadcast::Receiver<WalletEvent>`; consumers attach at +//! startup and drain the stream. [`spawn_wallet_event_adapter`] is the +//! platform-wallet-side consumer: a tokio task that pulls events off +//! that broadcast, projects each one into a +//! [`CoreChangeSet`](crate::changeset::CoreChangeSet), wraps it in a +//! [`PlatformWalletChangeSet`](crate::changeset::PlatformWalletChangeSet), +//! and forwards to the [`PlatformWalletPersistence`] sink. //! //! # Why a single subscriber, not per-wallet //! -//! `WalletManager::subscribe_events` returns a `broadcast::Receiver<WalletEvent>` that -//! sees every event for every wallet. The adapter routes by `wallet_id` -//! at projection time — there's no need to spawn a task per wallet. +//! The broadcast channel emits every event for every wallet. Each +//! event already carries a `wallet_id`, which the adapter forwards +//! verbatim to [`PlatformWalletPersistence::store`] — no need to fan +//! out a subscriber per wallet. //! //! # Lifetime //! //! [`spawn_wallet_event_adapter`] returns a [`JoinHandle`]. The caller -//! (typically `PlatformWalletManager`) keeps it for the manager's -//! lifetime; on shutdown, fire the [`CancellationToken`] to make the -//! task exit cleanly. +//! (typically `PlatformWalletManager`) keeps the handle for the +//! manager's lifetime; on shutdown, fire the [`CancellationToken`] to +//! make the task exit cleanly.
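A std-library sketch of the single-subscriber routing idea from the module docs above. The `WalletId` alias, `Event` enum, and sink map are hypothetical stand-ins; the real adapter uses a tokio broadcast channel and forwards each changeset to `PlatformWalletPersistence::store`:

```rust
// One consumer drains a channel of (wallet_id, event) pairs and routes
// by wallet_id at projection time; no task-per-wallet fan-out needed.

use std::collections::BTreeMap;
use std::sync::mpsc;

type WalletId = [u8; 4]; // stand-in for the real 32-byte wallet id

#[derive(Debug, Clone, PartialEq)]
enum Event {
    TxSeen(&'static str),
}

fn main() {
    let (tx, rx) = mpsc::channel::<(WalletId, Event)>();

    tx.send(([1; 4], Event::TxSeen("a"))).unwrap();
    tx.send(([2; 4], Event::TxSeen("b"))).unwrap();
    tx.send(([1; 4], Event::TxSeen("c"))).unwrap();
    drop(tx); // close the channel so the drain loop ends

    // Single subscriber: the wallet_id carried on each event is
    // enough to route the write to the right per-wallet sink.
    let mut per_wallet: BTreeMap<WalletId, Vec<Event>> = BTreeMap::new();
    for (wallet_id, event) in rx {
        per_wallet.entry(wallet_id).or_default().push(event);
    }

    assert_eq!(per_wallet[&[1; 4]].len(), 2);
    assert_eq!(per_wallet[&[2; 4]], vec![Event::TxSeen("b")]);
}
```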
use std::sync::Arc; @@ -44,17 +48,25 @@ use crate::wallet::platform_wallet::PlatformWalletInfo; /// runtime), then loops dispatching events to the persister via /// [`PlatformWalletPersistence::store`]. Exits when `cancel` fires /// or the upstream broadcast channel closes. -pub fn spawn_wallet_event_adapter( +/// +/// Generic over `P` so the spawned task gets static-dispatch on +/// every `persister.store(...)` call. Pass the manager's own +/// `Arc<P>` (not the `Arc<dyn PlatformWalletPersistence>` +/// coercion) to actually realize the static-dispatch win. +pub fn spawn_wallet_event_adapter<P>( wallet_manager: Arc>>, - persister: Arc<dyn PlatformWalletPersistence>, + persister: Arc<P>, cancel: CancellationToken, -) -> JoinHandle<()> { +) -> JoinHandle<()> +where + P: PlatformWalletPersistence + 'static, +{ tokio::spawn(async move { let mut receiver = { let guard = wallet_manager.read().await; guard.subscribe_events() }; - tracing::debug!("WalletEventAdapter task started"); + tracing::debug!("wallet-event adapter task started"); loop { tokio::select! { @@ -93,7 +105,7 @@ pub fn spawn_wallet_event_adapter( Err(RecvError::Lagged(n)) => { tracing::warn!( missed = n, - "WalletEventAdapter lagged on broadcast channel; some events were dropped" + "wallet-event adapter lagged on broadcast channel; some events were dropped" ); } } @@ -101,7 +113,7 @@ _ = cancel.cancelled() => break, } } - tracing::debug!("WalletEventAdapter task exiting"); + tracing::debug!("wallet-event adapter task exiting"); }) } diff --git a/packages/rs-platform-wallet/src/changeset/mod.rs b/packages/rs-platform-wallet/src/changeset/mod.rs index 9cc2f559a51..bd6650431fe 100644 --- a/packages/rs-platform-wallet/src/changeset/mod.rs +++ b/packages/rs-platform-wallet/src/changeset/mod.rs @@ -19,11 +19,11 @@ pub mod platform_address_sync_start_state; pub mod traits; pub use changeset::{ - AssetLockChangeSet, AssetLockEntry, ContactChangeSet, ContactRequestEntry, CoreChangeSet, - IdentityChangeSet, IdentityEntry, IdentityKeyDerivationIndices, IdentityKeyEntry, - IdentityKeysChangeSet, PlatformAddressBalanceEntry, PlatformAddressChangeSet, - PlatformWalletChangeSet, ReceivedContactRequestKey, SentContactRequestKey, - TokenBalanceChangeSet, + AccountAddressPoolEntry, AccountRegistrationEntry, AssetLockChangeSet, AssetLockEntry, + ContactChangeSet, ContactRequestEntry, CoreChangeSet, IdentityChangeSet, IdentityEntry, + IdentityKeyDerivationIndices, IdentityKeyEntry, IdentityKeysChangeSet, + PlatformAddressBalanceEntry, PlatformAddressChangeSet, PlatformWalletChangeSet, + ReceivedContactRequestKey, SentContactRequestKey, TokenBalanceChangeSet,
WalletMetadataEntry, }; pub use client_start_state::ClientStartState; pub use client_wallet_start_state::ClientWalletStartState; diff --git a/packages/rs-platform-wallet/src/changeset/traits.rs b/packages/rs-platform-wallet/src/changeset/traits.rs index 8ea855bd1ff..81329d0c385 100644 --- a/packages/rs-platform-wallet/src/changeset/traits.rs +++ b/packages/rs-platform-wallet/src/changeset/traits.rs @@ -3,12 +3,6 @@ //! Implementors choose their own storage engine (SQLite, file, memory, remote). //! The traits guarantee that deltas are persisted atomically. -use key_wallet::account::AccountType; -use key_wallet::bip32::ExtendedPubKey; -use key_wallet::managed_account::address_pool::AddressPoolType; -use key_wallet::AddressInfo; -use key_wallet::Network; - use crate::changeset::changeset::PlatformWalletChangeSet; use crate::changeset::client_start_state::ClientStartState; use crate::wallet::platform_wallet::WalletId; @@ -154,65 +148,4 @@ pub trait PlatformWalletPersistence: Send + Sync { /// already keyed by wallet id and the sub-changesets carry their own /// wallet attribution where needed. fn load(&self) -> Result<ClientStartState, PersistenceError>; - - /// Persist an account entry for `wallet_id`. Called on account - /// insertion (initial registration or later `add_account` calls). - /// - /// This captures enough material to reconstruct every account of a - /// wallet as watch-only via `Account::from_xpub` at load time — - /// hardened derivation at the account level means the per-account - /// xpub is the only way to get it back without the mnemonic. The - /// `wallet_id` travels through `store_wallet_metadata`; paired with - /// this call, `Wallet::new_watch_only(network, wallet_id, accounts)` - /// has everything it needs. - /// - /// Default implementation is a no-op.
- fn store_account( - &self, - wallet_id: WalletId, - account_type: &AccountType, - account_xpub: &ExtendedPubKey, - ) -> Result<(), PersistenceError> { - let _ = (wallet_id, account_type, account_xpub); - Ok(()) - } - - /// Persist per-wallet metadata that isn't derivable from the xpub - /// alone: the network this wallet is bound to and the birth height - /// (best estimate of the chain tip at creation time; 0 means - /// "scan from genesis / unknown"). - /// - /// Called once at registration alongside - /// [`store_wallet_root_xpub`](Self::store_wallet_root_xpub). - /// - /// Default implementation is a no-op. - fn store_wallet_metadata( - &self, - wallet_id: WalletId, - network: Network, - birth_height: u32, - ) -> Result<(), PersistenceError> { - let _ = (wallet_id, network, birth_height); - Ok(()) - } - - /// Persist every [`AddressInfo`] from one of an account's address - /// pools. Called per pool — callers emit external then internal - /// (or the single pool for simpler account types) so the entries - /// slice is homogeneous with respect to `pool_type`. - /// - /// Called at wallet create (initial gap-limit population) and on - /// any pool extension / "used" flip that happens during sync. - /// - /// Default implementation is a no-op. 
- fn store_account_addresses( - &self, - wallet_id: WalletId, - account_type: &AccountType, - pool_type: AddressPoolType, - addresses: &[AddressInfo], - ) -> Result<(), PersistenceError> { - let _ = (wallet_id, account_type, pool_type, addresses); - Ok(()) - } } diff --git a/packages/rs-platform-wallet/src/events.rs b/packages/rs-platform-wallet/src/events.rs index 653c48e8aa5..5111636b459 100644 --- a/packages/rs-platform-wallet/src/events.rs +++ b/packages/rs-platform-wallet/src/events.rs @@ -16,7 +16,7 @@ use arc_swap::ArcSwap; pub use dash_spv::EventHandler; pub use key_wallet_manager::WalletEvent; -use crate::platform_address_sync::PlatformAddressSyncSummary; +use crate::manager::platform_address_sync::PlatformAddressSyncSummary; /// Extension of [`EventHandler`] for platform-wallet consumers. /// @@ -28,7 +28,7 @@ pub trait PlatformEventHandler: EventHandler { /// /// Default impl is a no-op so existing handlers don't have to care. /// - /// [`PlatformAddressSyncManager`]: crate::platform_address_sync::PlatformAddressSyncManager + /// [`PlatformAddressSyncManager`]: crate::manager::platform_address_sync::PlatformAddressSyncManager fn on_platform_address_sync_completed(&self, _summary: &PlatformAddressSyncSummary) {} } diff --git a/packages/rs-platform-wallet/src/lib.rs b/packages/rs-platform-wallet/src/lib.rs index 93e2d43d1ac..625bd5f489f 100644 --- a/packages/rs-platform-wallet/src/lib.rs +++ b/packages/rs-platform-wallet/src/lib.rs @@ -17,18 +17,22 @@ pub mod changeset; pub mod error; pub mod events; pub mod manager; -pub mod platform_address_sync; pub mod spv; pub mod wallet; pub use error::PlatformWalletError; pub use events::{PlatformEventHandler, PlatformEventManager}; pub use key_wallet::wallet::managed_wallet_info::asset_lock_builder::AssetLockFundingType; -pub use manager::PlatformWalletManager; -pub use platform_address_sync::{ +pub use manager::identity_sync::{ + IdentitySyncManager, IdentityTokenSyncInfo, IdentityTokenSyncState, + 
DEFAULT_SYNC_INTERVAL_SECS as IDENTITY_SYNC_DEFAULT_INTERVAL_SECS, + MAX_TOKENS_PER_BALANCE_BATCH as IDENTITY_SYNC_MAX_TOKENS_PER_BATCH, +}; +pub use manager::platform_address_sync::{ PlatformAddressSyncManager, PlatformAddressSyncSummary, WalletSyncOutcome, DEFAULT_SYNC_INTERVAL_SECS, }; +pub use manager::PlatformWalletManager; pub use spv::SpvRuntime; pub use wallet::asset_lock::manager::AssetLockManager; pub use wallet::asset_lock::tracked::{AssetLockStatus, TrackedAssetLock}; @@ -51,7 +55,6 @@ pub use wallet::identity::{ pub use wallet::platform_wallet::PlatformWalletInfo; pub use wallet::PlatformAddressTag; pub use wallet::PlatformWallet; -pub use wallet::TokenWallet; // Re-export changeset types for caller-level staging. pub use changeset::Merge; diff --git a/packages/rs-platform-wallet/src/manager/accessors.rs b/packages/rs-platform-wallet/src/manager/accessors.rs index a946b80b15b..7a5d34292ea 100644 --- a/packages/rs-platform-wallet/src/manager/accessors.rs +++ b/packages/rs-platform-wallet/src/manager/accessors.rs @@ -3,7 +3,8 @@ use std::sync::Arc; use crate::changeset::PlatformWalletPersistence; -use crate::platform_address_sync::PlatformAddressSyncManager; +use crate::manager::identity_sync::IdentitySyncManager; +use crate::manager::platform_address_sync::PlatformAddressSyncManager; use crate::spv::SpvRuntime; use crate::wallet::platform_wallet::WalletId; use crate::wallet::PlatformWallet; @@ -18,25 +19,37 @@ impl PlatformWalletManager<P>
{ /// Access the SPV runtime for sync control. pub fn spv(&self) -> &SpvRuntime { - &self.spv + &self.spv_manager } /// Clone the `Arc` so callers (e.g. FFI) can invoke /// [`SpvRuntime::spawn_in_background`] which takes `&Arc`. pub fn spv_arc(&self) -> Arc<SpvRuntime> { - Arc::clone(&self.spv) + Arc::clone(&self.spv_manager) } /// Access the platform-address sync coordinator. pub fn platform_address_sync(&self) -> &PlatformAddressSyncManager { - &self.platform_address_sync + &self.platform_address_sync_manager } /// Clone the `Arc` so callers (e.g. FFI) /// can invoke [`PlatformAddressSyncManager::start`] which takes /// `&Arc`. pub fn platform_address_sync_arc(&self) -> Arc<PlatformAddressSyncManager> { - Arc::clone(&self.platform_address_sync) + Arc::clone(&self.platform_address_sync_manager) + } + + /// Access the per-identity token state sync coordinator. + pub fn identity_sync(&self) -> &IdentitySyncManager<P>
{ + &self.identity_sync_manager + } + + /// Clone the `Arc<IdentitySyncManager<P>>` so callers (e.g. FFI) + /// can invoke [`IdentitySyncManager::start`], which takes + /// `self: Arc<Self>`. + pub fn identity_sync_arc(&self) -> Arc<IdentitySyncManager<P>> { + Arc::clone(&self.identity_sync_manager) } /// Get a clone of a wallet by its ID. diff --git a/packages/rs-platform-wallet/src/manager/identity_sync.rs b/packages/rs-platform-wallet/src/manager/identity_sync.rs new file mode 100644 index 00000000000..566372ab2b2 --- /dev/null +++ b/packages/rs-platform-wallet/src/manager/identity_sync.rs @@ -0,0 +1,841 @@ +//! Periodic per-identity token state sync coordinator. +//! +//! Self-contained — does not reach into [`PlatformWallet`] or +//! [`WalletManager`]. The manager owns its own identity → token +//! registry (the in-memory `state` cache *is* the registry; there is +//! no second source of truth). Callers add and remove identities and +//! the watched-token sets through the lifecycle API +//! ([`register_identity`](Self::register_identity), +//! [`unregister_identity`](Self::unregister_identity), +//! [`update_watched_tokens`](Self::update_watched_tokens)). +//! +//! Each pass walks every registered identity, snapshots its watched +//! token list, then sequentially: +//! +//! 1. Chunks the token list into batches of at most +//! [`MAX_TOKENS_PER_BALANCE_BATCH`] and queries Platform once per +//! batch via `IdentityTokenBalancesQuery` / +//! [`TokenAmount::fetch_many`]. Sequential (no parallelism across +//! batches or identities) — see crate-level note on SDK `!Send` +//! futures. +//! 2. Builds a [`TokenBalanceChangeSet`] from the batch result and +//! forwards it to the persister. (The old wallet-side write path, +//! `TokenWallet::sync` mutating `PlatformWalletInfo.token_balances` +//! directly, was deleted along with `TokenWallet`.) +//! 3. Updates the manager's own per-identity cache row in lockstep, +//! so callers reading [`state_for_identity`](Self::state_for_identity) +//!
after [`sync_now`](Self::sync_now) returns see fresh values. +//! 4. Stamps `last_sync_unix` for the identity. +//! +//! Per-(identity, contract) nonce fetching: the registry is keyed by +//! `(identity_id, token_id)` — it doesn't carry `contract_id` today, +//! and Platform requires a separate per-token data-contract fetch to +//! resolve it. Per the design brief, the cache field is plumbed +//! through (`IdentityTokenSyncInfo::contract_id` / +//! `identity_contract_nonce`) but the actual nonce fetch is a +//! follow-up — see the TODO inside [`IdentitySyncManager::sync_now`] +//! and the matching note on [`IdentityTokenSyncInfo::contract_id`]. +//! +//! Persister wiring caveat: the manager is identity-scoped, but +//! [`PlatformWalletPersistence::store`] takes a `WalletId`. The +//! changesets written here use [`WalletId::default()`] (`[0u8; 32]`) +//! as a sentinel — token-balance persistence on the FFI / SQLite side +//! is keyed by `(identity_id, token_id)`, so the wallet id is unused +//! on that callback path. +//! +//! Not auto-started. Call [`IdentitySyncManager::start`] once +//! identities are registered and the SDK is connected. + +use std::collections::BTreeMap; +use std::sync::{ + atomic::{AtomicBool, AtomicU64, Ordering}, + Arc, Mutex as StdMutex, +}; +use std::time::{Duration, SystemTime, UNIX_EPOCH}; + +use dpp::balances::credits::TokenAmount; +use dpp::prelude::Identifier; +use tokio::sync::RwLock; +use tokio_util::sync::CancellationToken; + +use dash_sdk::platform::tokens::identity_token_balances::{ + IdentityTokenBalances, IdentityTokenBalancesQuery, +}; +use dash_sdk::platform::FetchMany; + +use crate::changeset::{PlatformWalletPersistence, TokenBalanceChangeSet}; +use crate::wallet::platform_wallet::WalletId; + +/// Default cadence for the identity-token sync loop. +/// +/// Token state moves more slowly than UTXO balance — picking 60s here +/// keeps the loop quiet on idle wallets while still catching transfers +/// inside a minute. 
Tunable at runtime via +/// [`IdentitySyncManager::set_interval`]; this value is just the +/// startup default. +pub const DEFAULT_SYNC_INTERVAL_SECS: u64 = 60; + +/// Maximum number of token ids fetched in a single +/// `IdentityTokenBalancesQuery`. +/// +/// Platform tolerates larger batches but DAPI rate-limits and proof +/// sizes start to bite past ~100 entries. Keeping this conservative +/// also keeps each round-trip's wall time bounded so the sequential +/// pass doesn't stall behind one slow request. +pub const MAX_TOKENS_PER_BALANCE_BATCH: usize = 100; + +/// One row of the per-identity token cache held by +/// [`IdentitySyncManager`]. +/// +/// Carries the canonical balance for a `(identity_id, token_id)` pair +/// plus the contract context needed to drive token state transitions +/// originating from this identity. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub struct IdentityTokenSyncInfo { + /// The token id this row tracks. + pub token_id: Identifier, + /// Data contract that owns this token. Multiple tokens issued + /// from the same data contract share a single per-identity nonce + /// — see [`Self::identity_contract_nonce`]. + /// + /// Currently a placeholder (`Identifier::default()`) until token → + /// contract resolution lands. The watch registry is keyed by + /// token id only, and resolving the contract id requires a per- + /// token data-contract fetch — deferred until the registry + /// carries the field directly. The cache shape includes it now + /// so consumers don't have to re-plumb later. + pub contract_id: Identifier, + /// Latest balance reported by Platform. + pub balance: TokenAmount, + /// `IdentityContractNonce` this identity would use for the next + /// state transition against `contract_id`. Same value across + /// every row in the cache that shares an `(identity_id, + /// contract_id)`. Replicated onto each token row so the FFI + /// mirror can stay a flat array. 
+ /// + /// `0` means "not fetched yet" until contract id resolution is + /// wired up. + pub identity_contract_nonce: u64, +} + +/// Per-identity token sync state held by [`IdentitySyncManager`]. +/// +/// Identity-scoped, not wallet-scoped — the manager has no wallet +/// linkage of its own; per-wallet aggregation, if needed, lives a +/// layer above. +#[derive(Debug, Clone)] +pub struct IdentityTokenSyncState { + /// The identity these token rows belong to. + pub identity_id: Identifier, + /// Unix seconds at which the most recent pass for this identity + /// completed. `0` means "no pass has completed". + pub last_sync_unix: u64, + /// One row per watched token for this identity. + pub tokens: Vec<IdentityTokenSyncInfo>, +} + +/// Periodic per-identity token state sync coordinator. +/// +/// Self-contained: drives only its own registry / cache and writes +/// balance updates through an injected +/// [`PlatformWalletPersistence`] handle. No coupling to +/// [`PlatformWallet`](crate::wallet::PlatformWallet) or +/// `WalletManager`. +/// +/// `sync_now` is re-entrant-safe: if a pass is already running, +/// calling `sync_now` again returns immediately without doing any +/// work (the caller can check `is_syncing()` to distinguish). +pub struct IdentitySyncManager<P>
+where + P: PlatformWalletPersistence + 'static, +{ + /// SDK handle used to issue `IdentityTokenBalancesQuery` / + /// `TokenAmount::fetch_many` from the sync loop. + sdk: Arc<Sdk>, + /// Persister for [`TokenBalanceChangeSet`] writes. Identity-scoped + /// changesets travel under [`WalletId::default()`] since this + /// manager is not wallet-scoped — see crate-level docs. Generic + /// over `P` so every `persister.store(...)` call on the hot sync + /// loop dispatches statically. + persister: Arc<P>, + /// Cancel token for the background loop, if running. + background_cancel: StdMutex<Option<CancellationToken>>, + interval_secs: AtomicU64, + is_syncing: AtomicBool, + /// Unix seconds of the last completed pass across all identities. + /// `0` = never. Identity-level timestamps live on the per-identity + /// rows in [`IdentitySyncManager::state`]. + last_sync_unix: AtomicU64, + /// Per-identity registry / cache. Keyed by identity id; each row + /// carries the per-(identity, token) token rows plus the + /// per-identity last-sync timestamp. + /// + /// This is *the* registry — there is no separate "watched tokens" + /// store. Lifecycle methods + /// ([`register_identity`](Self::register_identity), + /// [`update_watched_tokens`](Self::update_watched_tokens), + /// [`unregister_identity`](Self::unregister_identity)) operate on + /// this map and the sync loop iterates it. + state: RwLock<BTreeMap<Identifier, IdentityTokenSyncState>>, +} + +impl<P> IdentitySyncManager<P>
+where + P: PlatformWalletPersistence + 'static, +{ + /// Construct a new manager. Pass an SDK handle (for token-balance + /// fetches) and a persister handle (for `TokenBalanceChangeSet` + /// writes). The registry starts empty — call + /// [`register_identity`](Self::register_identity) before + /// [`start`](Self::start). + pub fn new(sdk: Arc<Sdk>, persister: Arc<P>
) -> Self { + Self { + sdk, + persister, + background_cancel: StdMutex::new(None), + interval_secs: AtomicU64::new(DEFAULT_SYNC_INTERVAL_SECS), + is_syncing: AtomicBool::new(false), + last_sync_unix: AtomicU64::new(0), + state: RwLock::new(BTreeMap::new()), + } + } + + /// Add or replace the registry row for `identity_id`. + /// + /// Idempotent — calling with the same identity overwrites the + /// existing row and resets `last_sync_unix` to `0`. Each token + /// in `token_ids` becomes a watched row with `balance = 0`, + /// `contract_id = Identifier::default()`, + /// `identity_contract_nonce = 0`. The next sync pass populates + /// real values. + pub async fn register_identity<I>(&self, identity_id: Identifier, token_ids: I) + where + I: IntoIterator<Item = Identifier>, + { + let tokens: Vec<IdentityTokenSyncInfo> = token_ids + .into_iter() + .map(|token_id| IdentityTokenSyncInfo { + token_id, + contract_id: Identifier::default(), + balance: 0, + identity_contract_nonce: 0, + }) + .collect(); + let mut state = self.state.write().await; + state.insert( + identity_id, + IdentityTokenSyncState { + identity_id, + last_sync_unix: 0, + tokens, + }, + ); + } + + /// Remove the registry row for `identity_id`. + /// + /// Idempotent — a no-op if the identity isn't registered. + pub async fn unregister_identity(&self, identity_id: &Identifier) { + let mut state = self.state.write().await; + state.remove(identity_id); + } + + /// Replace the watched-token list for an already-registered + /// identity. + /// + /// Tokens that appear in both the old and new lists keep their + /// cached balance / contract / nonce. Tokens that drop out of the + /// new list are removed from the cache. Tokens new to the list + /// are inserted with `balance = 0` and zero placeholders, just + /// like [`register_identity`](Self::register_identity). + /// + /// If the identity isn't registered, this is a no-op — callers + /// must call [`register_identity`](Self::register_identity) + /// first.
This is the conservative choice: silently promoting an + /// unknown identity to a registered one would mask programmer + /// errors at the caller. A typoed identity id that today + /// surfaces as "balance never updates" would instead surface as + /// spurious DAPI traffic for an identity the caller never meant + /// to track. + pub async fn update_watched_tokens<I>(&self, identity_id: Identifier, token_ids: I) + where + I: IntoIterator<Item = Identifier>, + { + let new_tokens: Vec<Identifier> = token_ids.into_iter().collect(); + let mut state = self.state.write().await; + let Some(row) = state.get_mut(&identity_id) else { + return; + }; + // Index existing rows by token id so we can preserve balances + // for tokens still in the new set. + let existing: BTreeMap<Identifier, IdentityTokenSyncInfo> = row + .tokens + .iter() + .map(|info| (info.token_id, *info)) + .collect(); + + let merged: Vec<IdentityTokenSyncInfo> = new_tokens + .into_iter() + .map(|token_id| { + existing + .get(&token_id) + .copied() + .unwrap_or(IdentityTokenSyncInfo { + token_id, + contract_id: Identifier::default(), + balance: 0, + identity_contract_nonce: 0, + }) + }) + .collect(); + + row.tokens = merged; + } + + /// Set the polling interval. Clamped to a minimum of 1s. + /// + /// The running loop picks this up on its next sleep. + pub fn set_interval(&self, interval: Duration) { + let secs = interval.as_secs().max(1); + self.interval_secs.store(secs, Ordering::Release); + } + + /// Current polling interval. + pub fn interval(&self) -> Duration { + Duration::from_secs(self.interval_secs.load(Ordering::Acquire)) + } + + /// Whether the background loop is currently running. + pub fn is_running(&self) -> bool { + self.background_cancel + .lock() + .map(|g| g.is_some()) + .unwrap_or(false) + } + + /// Whether a sync pass is in flight right now. + pub fn is_syncing(&self) -> bool { + self.is_syncing.load(Ordering::Acquire) + } + + /// Unix seconds of the last completed pass (across all identities), + /// or `None` if no pass has ever completed.
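The retain-or-reset merge that `update_watched_tokens` performs can be sketched with std types alone. `TokenRow` and `merge_watched` are illustrative stand-ins for `IdentityTokenSyncInfo` and the real method:

```rust
// Index the old rows by token id, then rebuild in new-list order so
// retained tokens keep their cached balance, dropped tokens vanish,
// and new tokens start from zero placeholders.

use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq)]
struct TokenRow {
    token_id: u32,
    balance: u64,
}

fn merge_watched(old: &[TokenRow], new_ids: &[u32]) -> Vec<TokenRow> {
    let existing: BTreeMap<u32, TokenRow> =
        old.iter().map(|r| (r.token_id, *r)).collect();
    new_ids
        .iter()
        .map(|&token_id| {
            existing
                .get(&token_id)
                .copied()
                // New tokens start at zero; the next pass fills them in.
                .unwrap_or(TokenRow { token_id, balance: 0 })
        })
        .collect()
}

fn main() {
    let old = [
        TokenRow { token_id: 1, balance: 50 },
        TokenRow { token_id: 2, balance: 75 },
    ];
    // Keep token 1, drop token 2, add token 3.
    let merged = merge_watched(&old, &[1, 3]);
    assert_eq!(merged, vec![
        TokenRow { token_id: 1, balance: 50 }, // cached balance preserved
        TokenRow { token_id: 3, balance: 0 },  // fresh placeholder
    ]);
}
```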
+ pub fn last_sync_unix_seconds(&self) -> Option<u64> { + match self.last_sync_unix.load(Ordering::Acquire) { + 0 => None, + n => Some(n), + } + } + + /// Per-identity last-sync timestamp. + /// + /// Returns `None` if the identity has never been synced (or isn't + /// known to this manager yet). + pub async fn last_sync_unix_for_identity(&self, identity_id: &Identifier) -> Option<u64> { + let state = self.state.read().await; + state + .get(identity_id) + .map(|s| s.last_sync_unix) + .filter(|t| *t != 0) + } + + /// Snapshot the cache row for a single identity, cloned out so + /// callers don't hold the manager's read lock across their work. + pub async fn state_for_identity( + &self, + identity_id: &Identifier, + ) -> Option<IdentityTokenSyncState> { + let state = self.state.read().await; + state.get(identity_id).cloned() + } + + /// Snapshot the entire per-identity cache. Clone of a + /// `BTreeMap<Identifier, IdentityTokenSyncState>` — every value + /// itself owns a `Vec<IdentityTokenSyncInfo>`, so the clone is + /// O(total tokens). Used by the FFI snapshot path. + pub async fn all_state(&self) -> BTreeMap<Identifier, IdentityTokenSyncState> { + let state = self.state.read().await; + state.clone() + } + + /// Start the background sync loop. Idempotent — calling while + /// already running is a no-op. + /// + /// The loop runs on a dedicated OS thread, not on a tokio worker. + /// This is forced on us by the fact that the SDK token-fetch + /// futures are `!Send` (the GRPC client state inside the SDK + /// isn't `Send + Sync`), so they can't ride on `tokio::spawn`, + /// which demands `Future: Send + 'static`. We use + /// [`tokio::runtime::Handle::block_on`] so the future still has + /// access to the main runtime's reactor for network I/O — only + /// the polling thread is dedicated. + /// + /// The first pass runs immediately; subsequent passes fire every + /// [`interval`](Self::interval).
+ pub fn start(self: Arc<Self>) { + let mut guard = self.background_cancel.lock().expect("bg_cancel poisoned"); + if guard.is_some() { + return; + } + let cancel = CancellationToken::new(); + *guard = Some(cancel.clone()); + drop(guard); + + let handle = tokio::runtime::Handle::current(); + let this = self; + std::thread::Builder::new() + .name("identity-sync".into()) + .spawn(move || { + handle.block_on(async move { + loop { + if cancel.is_cancelled() { + break; + } + + this.sync_now().await; + + let interval = this.interval(); + tokio::select! { + _ = tokio::time::sleep(interval) => {} + _ = cancel.cancelled() => break, + } + } + + if let Ok(mut guard) = this.background_cancel.lock() { + *guard = None; + } + }); + }) + .expect("failed to spawn identity-sync thread"); + } + + /// Stop the background sync loop. No-op if not running. + pub fn stop(&self) { + if let Some(token) = self + .background_cancel + .lock() + .expect("bg_cancel poisoned") + .take() + { + token.cancel(); + } + } + + /// Run one sync pass across every registered identity. + /// + /// If a pass is already in flight, returns immediately without + /// doing any work (re-entrant safe). + /// + /// Iteration order: identities in `Identifier` order, batches in + /// token-id order. Sequential — no parallelism across batches or + /// identities — both because the SDK token-fetch futures are + /// `!Send` (no `tokio::spawn`) and because the design brief + /// explicitly forbids it. + pub async fn sync_now(&self) { + if self + .is_syncing + .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire) + .is_err() + { + return; + } + + // Snapshot the per-identity watch list under a short read + // lock and release it before any network call. We keep + // `Vec<Identifier>` in token-id order so each batch chunk is + // deterministic.
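The chunk-and-accumulate flow that `sync_identity` applies can be sketched std-only. `fetch_batch` is a hypothetical stand-in for the one-query-per-batch Platform call; everything else mirrors the shape of the real code:

```rust
// Chunk a watched-token list into batches of at most
// MAX_TOKENS_PER_BALANCE_BATCH and accumulate per-token results
// across batches before touching any shared state.

use std::collections::BTreeMap;

const MAX_TOKENS_PER_BALANCE_BATCH: usize = 100;

// Pretend fetch: even token ids have a balance, odd ids come back
// `None` (Platform reporting the token removed for this identity).
fn fetch_batch(chunk: &[u32]) -> BTreeMap<u32, Option<u64>> {
    chunk
        .iter()
        .map(|&id| (id, (id % 2 == 0).then(|| u64::from(id) * 10)))
        .collect()
}

fn main() {
    let token_ids: Vec<u32> = (0..250).collect();

    let mut fresh: BTreeMap<u32, Option<u64>> = BTreeMap::new();
    for chunk in token_ids.chunks(MAX_TOKENS_PER_BALANCE_BATCH) {
        // One round-trip per batch; a failed batch would simply be
        // skipped here, leaving the prior cache rows untouched.
        fresh.extend(fetch_batch(chunk));
    }

    assert_eq!(token_ids.chunks(MAX_TOKENS_PER_BALANCE_BATCH).count(), 3);
    assert_eq!(fresh.len(), 250);
    assert_eq!(fresh[&4], Some(40));
    assert_eq!(fresh[&5], None);
}
```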
+ let per_identity: BTreeMap<Identifier, Vec<Identifier>> = { + let state = self.state.read().await; + state + .iter() + .filter(|(_, row)| !row.tokens.is_empty()) + .map(|(iid, row)| (*iid, row.tokens.iter().map(|t| t.token_id).collect())) + .collect() + }; + + for (identity_id, token_ids) in per_identity { + self.sync_identity(identity_id, &token_ids).await; + } + + let now = SystemTime::now() + .duration_since(UNIX_EPOCH) + .map(|d| d.as_secs()) + .unwrap_or(0); + self.last_sync_unix.store(now, Ordering::Release); + self.is_syncing.store(false, Ordering::Release); + } + + /// Sync a single identity's watched tokens against Platform. + /// + /// Splits `token_ids` into batches of at most + /// [`MAX_TOKENS_PER_BALANCE_BATCH`], issues one + /// `IdentityTokenBalancesQuery` per batch, builds a + /// [`TokenBalanceChangeSet`] for the persister, and rewrites this + /// manager's per-identity cache row. + /// + /// Errors are logged and the per-identity row is left at its + /// previous value rather than being cleared — a transient DAPI + /// error shouldn't drop the cached balance from under the UI. + async fn sync_identity(&self, identity_id: Identifier, token_ids: &[Identifier]) { + if token_ids.is_empty() { + return; + } + + // Accumulate balances across batches before touching the + // cache or the persister — keep batch network calls unlocked. + let mut fresh_balances: BTreeMap<Identifier, Option<TokenAmount>> = BTreeMap::new(); + + for chunk in token_ids.chunks(MAX_TOKENS_PER_BALANCE_BATCH) { + let query = IdentityTokenBalancesQuery { + identity_id, + token_ids: chunk.to_vec(), + }; + + // Type-annotate the call site explicitly: `fetch_many` + // is generic over the response type, and the inference + // chain through `RetrievedObjects` doesn't pick a unique + // implementor without a hint. Same pattern the removed + // `TokenWallet::sync` used.
+            let fetched: Result<RetrievedObjects<Identifier, Option<TokenAmount>>, _> =
+                TokenAmount::fetch_many(self.sdk.as_ref(), query).await;
+            match fetched {
+                Ok(result) => {
+                    for (token_id, maybe_balance) in result.iter() {
+                        fresh_balances.insert(*token_id, *maybe_balance);
+                    }
+                }
+                Err(e) => {
+                    tracing::warn!(
+                        identity_id = %identity_id,
+                        chunk_len = chunk.len(),
+                        error = %e,
+                        "identity-sync: token balance batch failed; leaving cache untouched for this batch"
+                    );
+                    // Skip this batch; do not poison fresh_balances.
+                    // Other batches for this identity may still
+                    // succeed.
+                }
+            }
+        }
+
+        if fresh_balances.is_empty() {
+            return;
+        }
+
+        // Build the changeset and update our own cache in lockstep.
+        let mut cs = TokenBalanceChangeSet::default();
+        for (token_id, maybe_balance) in &fresh_balances {
+            let key = (identity_id, *token_id);
+            match maybe_balance {
+                Some(amount) => {
+                    cs.balances.insert(key, *amount);
+                }
+                None => {
+                    cs.removed_balances.insert(key);
+                }
+            }
+        }
+
+        // The persister API is wallet-scoped (`store(wallet_id, ..)`)
+        // but this manager is identity-scoped. Use the zero-byte
+        // sentinel — the FFI / SQLite token-balance write paths key
+        // their rows by `(identity_id, token_id)` and ignore the
+        // wallet id on this changeset.
+        let sentinel: WalletId = WalletId::default();
+        if let Err(e) = self.persister.store(sentinel, cs.into()) {
+            tracing::error!(
+                identity_id = %identity_id,
+                error = %e,
+                "identity-sync: failed to persist token balance changeset"
+            );
+        }
+
+        // TODO(identity-sync nonce): once token-id → contract-id
+        // resolution lands on the registry (currently keyed by token
+        // id only), fetch the per-(identity, contract) nonce here via
+        // `self.sdk.get_identity_contract_nonce(identity_id,
+        // contract_id, false, None).await` and replicate it onto
+        // every token row that shares the same contract.
+        // The `IdentityTokenSyncInfo::contract_id` field is plumbed
+        // through with an `Identifier::default()` placeholder so the
+        // FFI mirror shape doesn't have to change when this lands.
+
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .map(|d| d.as_secs())
+            .unwrap_or(0);
+
+        // Rewrite the per-identity cache row from the freshly fetched
+        // balances. Tokens that returned `None` (i.e. removed on
+        // Platform) drop out of the row; tokens that returned `Some`
+        // get the new balance. We rebuild rather than splice so that
+        // the row always reflects the latest watched-token set
+        // intersected with what Platform reports.
+        let mut state = self.state.write().await;
+        if let Some(existing_row) = state.get(&identity_id).cloned() {
+            // Map each currently-watched token to its new info: keep
+            // the old contract / nonce placeholders, swap in the
+            // fresh balance if we got one, drop the row entirely if
+            // Platform removed it.
+            let prior_by_id: BTreeMap<Identifier, IdentityTokenSyncInfo> = existing_row
+                .tokens
+                .iter()
+                .map(|info| (info.token_id, *info))
+                .collect();
+
+            let mut new_tokens: Vec<IdentityTokenSyncInfo> = Vec::with_capacity(token_ids.len());
+            for token_id in token_ids {
+                match fresh_balances.get(token_id) {
+                    Some(Some(amount)) => {
+                        let prior =
+                            prior_by_id
+                                .get(token_id)
+                                .copied()
+                                .unwrap_or(IdentityTokenSyncInfo {
+                                    token_id: *token_id,
+                                    contract_id: Identifier::default(),
+                                    balance: 0,
+                                    identity_contract_nonce: 0,
+                                });
+                        new_tokens.push(IdentityTokenSyncInfo {
+                            balance: *amount,
+                            ..prior
+                        });
+                    }
+                    Some(None) => {
+                        // Platform reported the token removed for
+                        // this identity — drop the row.
+                    }
+                    None => {
+                        // Batch failed for this token — keep the
+                        // prior row to avoid clobbering on transient
+                        // errors.
+                        if let Some(prior) = prior_by_id.get(token_id).copied() {
+                            new_tokens.push(prior);
+                        }
+                    }
+                }
+            }
+
+            state.insert(
+                identity_id,
+                IdentityTokenSyncState {
+                    identity_id,
+                    last_sync_unix: now,
+                    tokens: new_tokens,
+                },
+            );
+        }
+    }
+}
+
+impl<P> std::fmt::Debug for IdentitySyncManager<P>
+where
+    P: PlatformWalletPersistence + 'static,
+{
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("IdentitySyncManager")
+            .field("is_running", &self.is_running())
+            .field("is_syncing", &self.is_syncing())
+            .field("interval_secs", &self.interval_secs.load(Ordering::Acquire))
+            .field("last_sync_unix", &self.last_sync_unix_seconds())
+            .finish()
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    use crate::changeset::{ClientStartState, PersistenceError, PlatformWalletChangeSet};
+
+    /// Test-only persister that swallows every `store` call and
+    /// records nothing. Lifecycle / registry tests don't need the
+    /// real persistence pipeline; they just need a handle that
+    /// satisfies the `Arc<P>` constructor parameter.
+    struct NoopPersister;
+
+    impl PlatformWalletPersistence for NoopPersister {
+        fn store(
+            &self,
+            _wallet_id: WalletId,
+            _changeset: PlatformWalletChangeSet,
+        ) -> Result<(), PersistenceError> {
+            Ok(())
+        }
+
+        fn flush(&self, _wallet_id: WalletId) -> Result<(), PersistenceError> {
+            Ok(())
+        }
+
+        fn load(&self) -> Result<ClientStartState, PersistenceError> {
+            Ok(ClientStartState::default())
+        }
+    }
+
+    /// Build a manager wired to a no-op persister. The SDK is
+    /// constructed via `SdkBuilder::new_mock` so we don't need a
+    /// running runtime for the registry/lifecycle tests below; none
+    /// of them exercise the network path.
+    fn make_manager() -> Arc<IdentitySyncManager<NoopPersister>> {
+        let sdk = Arc::new(dash_sdk::SdkBuilder::new_mock().build().expect("mock sdk"));
+        let persister = Arc::new(NoopPersister);
+        Arc::new(IdentitySyncManager::new(sdk, persister))
+    }
+
+    /// `register_identity` populates a row with zero-balance
+    /// placeholders for each token, and `state_for_identity` returns
+    /// the cloned row. Validates the read API the FFI snapshot path
+    /// depends on.
+    #[tokio::test]
+    async fn register_and_read_identity_state() {
+        let mgr = make_manager();
+        let id_a = Identifier::from([1u8; 32]);
+        let id_b = Identifier::from([2u8; 32]);
+        let token_x = Identifier::from([10u8; 32]);
+        let token_y = Identifier::from([11u8; 32]);
+
+        mgr.register_identity(id_a, [token_x, token_y]).await;
+        mgr.register_identity(id_b, [token_x]).await;
+
+        let row_a = mgr.state_for_identity(&id_a).await.unwrap();
+        assert_eq!(row_a.identity_id, id_a);
+        assert_eq!(row_a.last_sync_unix, 0);
+        assert_eq!(row_a.tokens.len(), 2);
+        assert!(row_a.tokens.iter().all(|t| t.balance == 0));
+        assert!(row_a
+            .tokens
+            .iter()
+            .all(|t| t.contract_id == Identifier::default()));
+        assert!(row_a.tokens.iter().all(|t| t.identity_contract_nonce == 0));
+
+        // last_sync_unix_for_identity reports None until a pass
+        // completes (placeholder rows have last_sync_unix == 0).
+        assert_eq!(mgr.last_sync_unix_for_identity(&id_a).await, None);
+
+        // Unknown identity → None on every read API.
+        let unknown = Identifier::from([99u8; 32]);
+        assert!(mgr.state_for_identity(&unknown).await.is_none());
+        assert_eq!(mgr.last_sync_unix_for_identity(&unknown).await, None);
+
+        // all_state returns both rows.
+        let all = mgr.all_state().await;
+        assert_eq!(all.len(), 2);
+        assert!(all.contains_key(&id_a));
+        assert!(all.contains_key(&id_b));
+    }
+
+    /// `register_identity` is idempotent: a second call replaces the
+    /// row, including its watched-token set, and resets
+    /// `last_sync_unix` to 0.
+    #[tokio::test]
+    async fn register_identity_is_idempotent() {
+        let mgr = make_manager();
+        let id_a = Identifier::from([1u8; 32]);
+        let token_x = Identifier::from([10u8; 32]);
+        let token_y = Identifier::from([11u8; 32]);
+        let token_z = Identifier::from([12u8; 32]);
+
+        mgr.register_identity(id_a, [token_x, token_y]).await;
+        // Re-register with a different token set.
+        mgr.register_identity(id_a, [token_z]).await;
+
+        let row = mgr.state_for_identity(&id_a).await.unwrap();
+        assert_eq!(row.tokens.len(), 1);
+        assert_eq!(row.tokens[0].token_id, token_z);
+        assert_eq!(row.last_sync_unix, 0);
+    }
+
+    /// `unregister_identity` drops the row; calling it again is a
+    /// no-op rather than an error.
+    #[tokio::test]
+    async fn unregister_identity_is_idempotent() {
+        let mgr = make_manager();
+        let id_a = Identifier::from([1u8; 32]);
+        let token_x = Identifier::from([10u8; 32]);
+
+        mgr.register_identity(id_a, [token_x]).await;
+        assert!(mgr.state_for_identity(&id_a).await.is_some());
+
+        mgr.unregister_identity(&id_a).await;
+        assert!(mgr.state_for_identity(&id_a).await.is_none());
+
+        // Idempotent: calling again on an unknown identity must not panic.
+        mgr.unregister_identity(&id_a).await;
+        mgr.unregister_identity(&Identifier::from([99u8; 32])).await;
+    }
+
+    /// `set_interval` clamps to >=1s and is read back via `interval`.
+    /// Default interval matches the documented constant. Pinned so
+    /// future tuning surfaces in the test suite.
+    #[tokio::test]
+    async fn interval_round_trip() {
+        let mgr = make_manager();
+
+        assert_eq!(
+            mgr.interval(),
+            Duration::from_secs(DEFAULT_SYNC_INTERVAL_SECS)
+        );
+
+        mgr.set_interval(Duration::from_secs(0));
+        assert_eq!(mgr.interval(), Duration::from_secs(1));
+
+        mgr.set_interval(Duration::from_secs(120));
+        assert_eq!(mgr.interval(), Duration::from_secs(120));
+    }
+
+    /// Round-trip: register → read → update_watched_tokens → read.
+    /// `update_watched_tokens` preserves the rows for tokens still in
+    /// the new set, drops removed ones, and inserts placeholders for
+    /// added ones. It is a no-op on an unknown identity.
+    #[tokio::test]
+    async fn update_watched_tokens_round_trip() {
+        let mgr = make_manager();
+        let id_a = Identifier::from([1u8; 32]);
+        let token_x = Identifier::from([10u8; 32]);
+        let token_y = Identifier::from([11u8; 32]);
+        let token_z = Identifier::from([12u8; 32]);
+
+        mgr.register_identity(id_a, [token_x, token_y]).await;
+
+        // Mutate the row in place to simulate a populated balance for
+        // token_x — we want to verify update_watched_tokens preserves
+        // it across the swap.
+        {
+            let mut state = mgr.state.write().await;
+            let row = state.get_mut(&id_a).unwrap();
+            for info in row.tokens.iter_mut() {
+                if info.token_id == token_x {
+                    info.balance = 12_345;
+                    info.identity_contract_nonce = 7;
+                }
+            }
+        }
+
+        // New set: keep token_x, drop token_y, add token_z.
+        mgr.update_watched_tokens(id_a, [token_x, token_z]).await;
+
+        let row = mgr.state_for_identity(&id_a).await.unwrap();
+        assert_eq!(row.tokens.len(), 2);
+
+        let by_id: BTreeMap<Identifier, IdentityTokenSyncInfo> =
+            row.tokens.iter().map(|t| (t.token_id, *t)).collect();
+        // Preserved.
+        let kept = by_id.get(&token_x).unwrap();
+        assert_eq!(kept.balance, 12_345);
+        assert_eq!(kept.identity_contract_nonce, 7);
+        // Dropped.
+        assert!(!by_id.contains_key(&token_y));
+        // Added with placeholders.
+        let added = by_id.get(&token_z).unwrap();
+        assert_eq!(added.balance, 0);
+        assert_eq!(added.identity_contract_nonce, 0);
+        assert_eq!(added.contract_id, Identifier::default());
+
+        // Unknown identity: no-op (does not promote to register).
+        let unknown = Identifier::from([99u8; 32]);
+        mgr.update_watched_tokens(unknown, [token_x]).await;
+        assert!(mgr.state_for_identity(&unknown).await.is_none());
+    }
+}
diff --git a/packages/rs-platform-wallet/src/manager/load.rs b/packages/rs-platform-wallet/src/manager/load.rs
index be3b73349b5..3ef8f610105 100644
--- a/packages/rs-platform-wallet/src/manager/load.rs
+++ b/packages/rs-platform-wallet/src/manager/load.rs
@@ -64,8 +64,6 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
             balance: Arc::clone(&balance),
             identity_manager: IdentityManager::from(identity_manager),
             tracked_asset_locks,
-            token_watched: BTreeMap::new(),
-            token_balances: BTreeMap::new(),
         };
 
         let wallet_id = {
@@ -87,7 +85,7 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
         }
 
         let broadcaster = Arc::new(crate::broadcaster::SpvBroadcaster::new(Arc::clone(
-            &self.spv,
+            &self.spv_manager,
         )));
         let platform_wallet = PlatformWallet::new(
             Arc::clone(&self.sdk),
diff --git a/packages/rs-platform-wallet/src/manager/mod.rs b/packages/rs-platform-wallet/src/manager/mod.rs
index 446830cc99d..58e2f046661 100644
--- a/packages/rs-platform-wallet/src/manager/mod.rs
+++ b/packages/rs-platform-wallet/src/manager/mod.rs
@@ -1,7 +1,9 @@
 //! Multi-wallet manager with SPV coordination.
 
 mod accessors;
+pub mod identity_sync;
 mod load;
+pub mod platform_address_sync;
 mod wallet_lifecycle;
 
 use std::sync::Arc;
@@ -14,7 +16,8 @@ use key_wallet_manager::WalletManager;
 
 use crate::changeset::{spawn_wallet_event_adapter, PlatformWalletPersistence};
 use crate::events::{PlatformEventHandler, PlatformEventManager};
-use crate::platform_address_sync::PlatformAddressSyncManager;
+use crate::manager::identity_sync::IdentitySyncManager;
+use crate::manager::platform_address_sync::PlatformAddressSyncManager;
 use crate::spv::SpvRuntime;
 use crate::wallet::asset_lock::LockNotifyHandler;
 use crate::wallet::core::BalanceUpdateHandler;
@@ -36,10 +39,15 @@ pub struct PlatformWalletManager<P: PlatformWalletPersistence + 'static> {
     pub(super) wallets: Arc<RwLock<BTreeMap<WalletId, Arc<PlatformWallet>>>>,
     /// Notified on InstantLock / ChainLock events for `AssetLockManager` waiters.
     pub(super) lock_notify: Arc<Notify>,
-    pub(super) spv: Arc<SpvRuntime>,
+    pub(super) spv_manager: Arc<SpvRuntime>,
     /// Periodic platform-address (BLAST) balance sync coordinator.
     /// Not auto-started — call `start` after wallets are registered.
-    pub(super) platform_address_sync: Arc<PlatformAddressSyncManager>,
+    pub(super) platform_address_sync_manager: Arc<PlatformAddressSyncManager>,
+    /// Periodic per-identity token state sync coordinator. Refreshes
+    /// the per-(identity, token) balance cache on every registered
+    /// wallet. Not auto-started — call `start` after wallets are
+    /// registered. See [`IdentitySyncManager`].
+    pub(super) identity_sync_manager: Arc<IdentitySyncManager<P>>,
    pub(super) persister: Arc<P>,
     /// Cancellation token + join handle for the wallet-event adapter
     /// task. Held so [`shutdown`] can stop it cleanly when the manager
@@ -59,24 +67,17 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
         persister: Arc<P>,
         app_handler: Arc<dyn PlatformEventHandler>,
     ) -> Self {
-        // `PlatformWallet` / `WalletPersister` and the new wallet-event
-        // adapter all consume `Arc<dyn PlatformWalletPersistence>`;
-        // coerce once here and pass clones along instead of re-erasing
-        // at every call site.
-        let dyn_persister: Arc<dyn PlatformWalletPersistence> = Arc::clone(&persister) as _;
         let wallet_manager = Arc::new(RwLock::new(WalletManager::new(sdk.network)));
         let wallets = Arc::new(RwLock::new(std::collections::BTreeMap::new()));
         let lock_notify = Arc::new(Notify::new());
 
         // Spawn the wallet-event adapter that translates upstream
         // `WalletEvent`s into `CoreChangeSet`s and forwards them to
-        // the persister. Replaces the old `CorePersistenceBridge`
-        // pattern (upstream `WalletPersistence` callback was deleted
-        // in favour of an event bus — see rust-dashcore PR #696).
+        // the persister.
         let event_adapter_cancel = CancellationToken::new();
         let event_adapter_join = spawn_wallet_event_adapter(
             Arc::clone(&wallet_manager),
-            Arc::clone(&dyn_persister),
+            Arc::clone(&persister),
             event_adapter_cancel.clone(),
         );
@@ -102,13 +103,18 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
             Arc::clone(&wallets),
             Arc::clone(&event_manager),
         ));
+        let identity_sync = Arc::new(IdentitySyncManager::new(
+            Arc::clone(&sdk),
+            Arc::clone(&persister),
+        ));
 
         Self {
             sdk,
             wallet_manager,
             wallets,
             lock_notify,
-            spv,
-            platform_address_sync,
+            spv_manager: spv,
+            platform_address_sync_manager: platform_address_sync,
+            identity_sync_manager: identity_sync,
             persister,
             event_adapter_cancel,
             event_adapter_join: tokio::sync::Mutex::new(Some(event_adapter_join)),
diff --git a/packages/rs-platform-wallet/src/platform_address_sync.rs b/packages/rs-platform-wallet/src/manager/platform_address_sync.rs
similarity index 100%
rename from packages/rs-platform-wallet/src/platform_address_sync.rs
rename to packages/rs-platform-wallet/src/manager/platform_address_sync.rs
diff --git a/packages/rs-platform-wallet/src/manager/wallet_lifecycle.rs b/packages/rs-platform-wallet/src/manager/wallet_lifecycle.rs
index bb1faeb0f04..09e3951b7c4 100644
--- a/packages/rs-platform-wallet/src/manager/wallet_lifecycle.rs
+++ b/packages/rs-platform-wallet/src/manager/wallet_lifecycle.rs
@@ -8,7 +8,10 @@ use key_wallet::wallet::managed_wallet_info::ManagedWalletInfo;
 use key_wallet::wallet::Wallet;
 use key_wallet::Network;
 
-use crate::changeset::PlatformWalletPersistence;
+use crate::changeset::{
+    AccountAddressPoolEntry, AccountRegistrationEntry, PlatformWalletChangeSet,
+    PlatformWalletPersistence, WalletMetadataEntry,
+};
 use crate::error::PlatformWalletError;
 use crate::wallet::core::WalletBalance;
 use crate::wallet::platform_wallet::{PlatformWalletInfo, WalletId};
@@ -160,8 +163,6 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
             balance: Arc::clone(&balance),
             identity_manager: crate::wallet::identity::IdentityManager::new(),
             tracked_asset_locks: std::collections::BTreeMap::new(),
-            token_watched: std::collections::BTreeMap::new(),
-            token_balances: std::collections::BTreeMap::new(),
         };
 
         // Insert into WalletManager.
@@ -175,77 +176,76 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
            })?
        };
 
-        // Emit metadata + per-account xpubs to the persister so the
-        // watch-only restore path has everything it needs on next
-        // launch. Failures are logged but don't abort wallet
-        // registration — the persister is a best-effort channel, not
-        // a source of truth in steady state.
+        // Emit metadata + per-account xpubs + per-pool address
+        // snapshots to the persister so the watch-only restore path
+        // has everything it needs on next launch. The whole
+        // registration round travels as a single
+        // [`PlatformWalletChangeSet`] through the canonical `store`
+        // entry point — backends (FFI, SQLite, in-memory) see one
+        // atomic round rather than three side-channel calls.
+        //
+        // Failures are logged but don't abort wallet registration —
+        // the persister is a best-effort channel, not a source of
+        // truth in steady state.
 
         // Birth height = SPV's confirmed header tip if SPV is running,
         // otherwise 0 (caller can bump it later when SPV catches up).
         // 0 means "scan from genesis", which is safe-correct for
        // fresh wallets.
         let birth_height: u32 = self
-            .spv
+            .spv_manager
             .sync_progress()
             .await
             .and_then(|p| p.headers().ok().map(|h| h.tip_height()))
             .unwrap_or(0);
 
-        if let Err(e) =
-            self.persister
-                .store_wallet_metadata(wallet_id, self.sdk.network, birth_height)
-        {
-            tracing::error!(
-                wallet_id = %hex::encode(wallet_id),
-                error = %e,
-                "failed to persist wallet metadata"
-            );
-        }
-
-        for (account_type, account_xpub) in &account_specs {
-            if let Err(e) = self
-                .persister
-                .store_account(wallet_id, account_type, account_xpub)
-            {
-                tracing::error!(
-                    wallet_id = %hex::encode(wallet_id),
-                    account_type = ?account_type,
-                    error = %e,
-                    "failed to persist account xpub"
-                );
-            }
-        }
+        let mut registration_changeset = PlatformWalletChangeSet {
+            wallet_metadata: Some(WalletMetadataEntry {
+                network: self.sdk.network,
+                birth_height,
+            }),
+            account_registrations: account_specs
+                .iter()
+                .map(|(account_type, account_xpub)| AccountRegistrationEntry {
+                    account_type: *account_type,
+                    account_xpub: *account_xpub,
+                })
+                .collect(),
+            ..Default::default()
+        };
 
-        // Emit the initial address pool contents per account. Every
-        // account type contributes at least one pool (external, or a
-        // single `Absent` pool for degenerate types); Standard
+        // Every account type contributes at least one pool (external,
+        // or a single `Absent` pool for degenerate types); Standard
         // accounts contribute two. Ordering within a pool is by
-        // derivation index via `BTreeMap::values`.
+        // derivation index via `BTreeMap::values`. Empty pools are
+        // dropped here so the FFI receiver can match the previous
+        // "skip empty pools" semantics without re-deciding it.
         for (account_type, pools) in &address_snapshots {
             for (pool_type, infos) in pools {
                 if infos.is_empty() {
                     continue;
                 }
-                if let Err(e) = self.persister.store_account_addresses(
-                    wallet_id,
-                    account_type,
-                    *pool_type,
-                    infos,
-                ) {
-                    tracing::error!(
-                        wallet_id = %hex::encode(wallet_id),
-                        account_type = ?account_type,
-                        pool_type = ?pool_type,
-                        error = %e,
-                        "failed to persist account addresses"
-                    );
-                }
+                registration_changeset
+                    .account_address_pools
+                    .push(AccountAddressPoolEntry {
+                        account_type: *account_type,
+                        pool_type: *pool_type,
+                        addresses: infos.clone(),
+                    });
             }
         }
 
+        if let Err(e) = self.persister.store(wallet_id, registration_changeset) {
+            tracing::error!(
+                wallet_id = %hex::encode(wallet_id),
+                error = %e,
+                "failed to persist wallet registration changeset"
+            );
+        }
+
         // Build the PlatformWallet handle.
         let broadcaster = Arc::new(crate::broadcaster::SpvBroadcaster::new(Arc::clone(
-            &self.spv,
+            &self.spv_manager,
         )));
         let persister_dyn: Arc<dyn PlatformWalletPersistence> = Arc::clone(&self.persister) as _;
@@ -297,6 +297,24 @@ impl<P: PlatformWalletPersistence + 'static> PlatformWalletManager<P> {
             wallets.insert(wallet_id, Arc::clone(&platform_wallet));
         }
 
+        // Best-effort identity discovery. For a recovery flow (existing
+        // mnemonic re-typed by the user) this hydrates every identity
+        // the wallet had on Platform without the caller having to fire
+        // `discover` manually. For a fresh wallet the gap-limit miss
+        // loop bails out after a handful of empty queries (~seconds)
+        // and produces nothing — same end state, slightly slower than
+        // skipping. Failures here are logged but never block wallet
+        // registration: a sync hiccup or offline DAPI shouldn't lose
+        // the user the wallet they just imported.
+        if let Err(e) = platform_wallet.identity().sync().await {
+            tracing::warn!(
+                wallet_id = %hex::encode(wallet_id),
+                error = %e,
+                "Identity discovery failed during wallet registration; \
+                 callers can retry via PlatformWallet::identity().discover()"
+            );
+        }
+
         Ok(platform_wallet)
     }
diff --git a/packages/rs-platform-wallet/src/wallet/apply.rs b/packages/rs-platform-wallet/src/wallet/apply.rs
index 57b23d7b39c..3405b950283 100644
--- a/packages/rs-platform-wallet/src/wallet/apply.rs
+++ b/packages/rs-platform-wallet/src/wallet/apply.rs
@@ -34,7 +34,13 @@
 //!    cached balance map.
 //! 5. `cs.asset_locks` — insert / remove tracked locks (with the
 //!    `AssetLockEntry` → `TrackedAssetLock` field rename).
-//! 6. `cs.token_balances` — balance updates + watch / unwatch deltas.
+//! 6. `cs.token_balances` — drained but **not replayed** here. The
+//!    canonical home of token-balance state is the
+//!    [`IdentitySyncManager`](crate::manager::identity_sync::IdentitySyncManager)
+//!    cache, which is rebuilt by the next sync pass; the FFI persister
+//!    surfaces upserts/tombstones to the Swift side via its own
+//!    callback. There is nothing on `PlatformWalletInfo` to apply
+//!    them onto.
 //! 7. `update_balance()` — recompute the cached `WalletBalance` from
 //!    the now-restored UTXO set; the returned changeset is discarded.
@@ -93,6 +99,15 @@ impl PlatformWalletInfo {
             token_balances,
             dashpay_profiles,
             dashpay_payments_overlay,
+            // Registration-round metadata / per-account specs /
+            // per-pool snapshots are persistence-only — the
+            // canonical in-memory wallet state is built up at
+            // creation time before this apply path ever runs.
+            // Drop them explicitly so future readers don't expect
+            // a replay hook here.
+            wallet_metadata: _,
+            account_registrations: _,
+            account_address_pools: _,
         } = cs;
 
         // 1. Core wallet state. In the new event-bus model, a
@@ -283,31 +298,13 @@ impl PlatformWalletInfo {
             }
         }
 
-        // 6. Token balances + watch registry deltas.
-        if let Some(tok_cs) = token_balances {
-            for (key, balance) in tok_cs.balances {
-                self.token_balances.insert(key, balance);
-            }
-            for key in tok_cs.removed_balances {
-                self.token_balances.remove(&key);
-            }
-            for (identity_id, tokens) in tok_cs.watched {
-                self.token_watched
-                    .entry(identity_id)
-                    .or_default()
-                    .extend(tokens);
-            }
-            for (identity_id, tokens) in tok_cs.unwatched {
-                if let Some(set) = self.token_watched.get_mut(&identity_id) {
-                    for token in &tokens {
-                        set.remove(token);
-                    }
-                    if set.is_empty() {
-                        self.token_watched.remove(&identity_id);
-                    }
-                }
-            }
-        }
+        // 6. Token balances. The persistent cache lives entirely on
+        // the FFI / Swift side now; the in-memory canonical balance
+        // state lives on `IdentitySyncManager`, which gets rebuilt
+        // by the next sync pass rather than replayed from a
+        // changeset. Drop the field explicitly so future readers
+        // don't expect a mutation hook here.
+        drop(token_balances);
 
         // 7. Recompute cached UI balance from the now-restored UTXO set.
         // `update_balance` returns its own changeset internally; we
@@ -331,7 +328,7 @@ impl PlatformWalletInfo {
 #[cfg(test)]
 mod tests {
     use super::*;
-    use std::collections::{BTreeMap, BTreeSet};
+    use std::collections::BTreeMap;
     use std::sync::Arc;
 
     use dashcore::OutPoint;
@@ -371,8 +368,6 @@ mod tests {
             balance: std::sync::Arc::new(WalletBalance::new()),
             identity_manager: IdentityManager::new(),
             tracked_asset_locks: BTreeMap::new(),
-            token_watched: BTreeMap::new(),
-            token_balances: BTreeMap::new(),
         }
     }
@@ -406,7 +401,6 @@ mod tests {
         info.apply_changeset(&mut wallet, cs).expect("apply");
         assert!(info.identity_manager.is_empty());
         assert!(info.tracked_asset_locks.is_empty());
-        assert!(info.token_balances.is_empty());
     }
 
     #[test]
@@ -676,8 +670,16 @@ mod tests {
         assert!(!info.tracked_asset_locks.contains_key(&out_point));
     }
 
+    /// Token-balance changesets are accepted by `apply_changeset` for
+    /// shape compatibility but are not replayed onto
+    /// `PlatformWalletInfo` (which no longer has token_balances /
+    /// token_watched fields). The canonical balance cache lives on
+    /// `IdentitySyncManager` and is rebuilt by the next sync pass; the
+    /// FFI persister surfaces the upserts/tombstones to the Swift side
+    /// directly. This test pins the no-replay contract: applying a
+    /// non-empty token-balance changeset must not error.
     #[test]
-    fn apply_token_unwatch_clears_set_and_removes_empty_identity() {
+    fn apply_token_balance_changeset_is_noop_on_info() {
         let mut wallet = build_test_wallet();
         let mut info = empty_info(&wallet);
 
@@ -686,28 +688,13 @@ mod tests {
         let mut tok_cs = TokenBalanceChangeSet::default();
         tok_cs.balances.insert((identity, token), 999);
-        let mut watched = BTreeSet::new();
-        watched.insert(token);
-        tok_cs.watched.insert(identity, watched);
-        let mut cs = PlatformWalletChangeSet::default();
-        cs.token_balances = Some(tok_cs);
-        info.apply_changeset(&mut wallet, cs).expect("apply watch");
-        assert_eq!(info.token_balances.get(&(identity, token)), Some(&999));
-        assert!(info.token_watched.get(&identity).unwrap().contains(&token));
-
-        // Unwatch the token — set becomes empty, identity entry should
-        // be removed entirely.
-        let mut tok_cs = TokenBalanceChangeSet::default();
-        let mut unwatched = BTreeSet::new();
-        unwatched.insert(token);
-        tok_cs.unwatched.insert(identity, unwatched);
         tok_cs.removed_balances.insert((identity, token));
         let mut cs = PlatformWalletChangeSet::default();
         cs.token_balances = Some(tok_cs);
-        info.apply_changeset(&mut wallet, cs)
-            .expect("apply unwatch");
-        assert!(!info.token_balances.contains_key(&(identity, token)));
-        assert!(!info.token_watched.contains_key(&identity));
+        info.apply_changeset(&mut wallet, cs).expect("apply token");
+
+        // No assertion against `info` — the field is gone. The point
+        // of this test is the call must not error.
     }
 
     // ----------------------------------------------------------------------
@@ -1645,6 +1632,11 @@ mod tests {
             },
         });
 
+        // Token balance changesets are accepted for shape compat but
+        // no longer drive `PlatformWalletInfo` state — the manager
+        // owns the balance cache. Include one anyway to confirm the
+        // double-apply still works once the field has been replaced
+        // with a `drop`.
         let mut tok_cs = TokenBalanceChangeSet::default();
         let token = Identifier::from([8u8; 32]);
         tok_cs.balances.insert((identity, token), 42);
@@ -1669,6 +1661,5 @@ mod tests {
             account.address_credit_balance(&PlatformP2PKHAddress::new([42u8; 20])),
             1_000
         );
-        assert_eq!(info.token_balances.get(&(identity, token)), Some(&42));
     }
 }
diff --git a/packages/rs-platform-wallet/src/wallet/identity/network/mod.rs b/packages/rs-platform-wallet/src/wallet/identity/network/mod.rs
index bc75bdf1d37..bea74f87882 100644
--- a/packages/rs-platform-wallet/src/wallet/identity/network/mod.rs
+++ b/packages/rs-platform-wallet/src/wallet/identity/network/mod.rs
@@ -39,7 +39,8 @@ mod payments;
 mod profile;
 
 // Token state-transition operations (same `IdentityWallet` impl blocks).
-// Bookkeeping (watch / sync / balance) stays on `TokenWallet`.
+// Bookkeeping (watch / sync / balance) lives on
+// `crate::manager::identity_sync::IdentitySyncManager`.
 mod tokens;
 
 pub use discovery::IdentityDiscoveryOptions;
diff --git a/packages/rs-platform-wallet/src/wallet/mod.rs b/packages/rs-platform-wallet/src/wallet/mod.rs
index 4e6d31635d3..a6d10726fc1 100644
--- a/packages/rs-platform-wallet/src/wallet/mod.rs
+++ b/packages/rs-platform-wallet/src/wallet/mod.rs
@@ -20,4 +20,3 @@ pub use platform_addresses::{
 pub use platform_wallet::{
     PlatformWallet, PlatformWalletInfo, WalletId, WalletStateReadGuard, WalletStateWriteGuard,
 };
-pub use tokens::TokenWallet;
diff --git a/packages/rs-platform-wallet/src/wallet/platform_wallet.rs b/packages/rs-platform-wallet/src/wallet/platform_wallet.rs
index 114fb291ec0..76610b8bca2 100644
--- a/packages/rs-platform-wallet/src/wallet/platform_wallet.rs
+++ b/packages/rs-platform-wallet/src/wallet/platform_wallet.rs
@@ -1,12 +1,10 @@
 //! The main PlatformWallet struct combining core, identity (+DashPay), and platform sub-wallets.
 
-use std::collections::{BTreeMap, BTreeSet};
+use std::collections::BTreeMap;
 use std::ops::{Deref, DerefMut};
 use std::sync::Arc;
 
 use dashcore::OutPoint;
-use dpp::balances::credits::TokenAmount;
-use dpp::prelude::Identifier;
 use key_wallet::wallet::managed_wallet_info::ManagedWalletInfo;
 use key_wallet::wallet::Wallet;
 use key_wallet_manager::WalletManager;
@@ -18,7 +16,6 @@ use super::core::{CoreWallet, WalletBalance};
 use super::identity::{IdentityManager, IdentityWallet};
 use super::persister::WalletPersister;
 use super::platform_addresses::PlatformAddressWallet;
-use super::tokens::TokenWallet;
 use crate::broadcaster::SpvBroadcaster;
 use crate::changeset::{
     ClientStartState, PersistenceError, PlatformWalletChangeSet, PlatformWalletPersistence,
@@ -42,8 +39,6 @@ pub struct PlatformWalletInfo {
     pub balance: Arc<WalletBalance>,
     pub identity_manager: IdentityManager,
     pub tracked_asset_locks: BTreeMap<OutPoint, TrackedAssetLock>,
-    pub token_watched: BTreeMap<Identifier, BTreeSet<Identifier>>,
-    pub token_balances: BTreeMap<(Identifier, Identifier), TokenAmount>,
 }
 
 /// A platform wallet that combines core UTXO functionality with identity management.
@@ -69,7 +64,6 @@ pub struct PlatformWallet {
     pub(crate) core: CoreWallet,
     pub(crate) identity: IdentityWallet,
     pub(crate) platform: PlatformAddressWallet,
-    pub(crate) tokens: TokenWallet,
     /// Shared asset lock manager.
     pub(crate) asset_locks: Arc<AssetLockManager<SpvBroadcaster>>,
     /// Per-wallet persistence handle.
@@ -100,11 +94,6 @@ impl PlatformWallet {
         &self.platform
     }
 
-    /// Access the token wallet.
-    pub fn tokens(&self) -> &TokenWallet {
-        &self.tokens
-    }
-
     /// Access the shared asset lock manager.
     pub fn asset_locks(&self) -> &Arc<AssetLockManager<SpvBroadcaster>> {
         &self.asset_locks
@@ -120,6 +109,14 @@ impl PlatformWallet {
         &self.sdk
     }
 
+    /// Clone the underlying `Arc<Sdk>` so callers (e.g. FFI
+    /// async blocks moved onto a worker runtime) can hold an
+    /// independently-owned SDK handle without keeping the
+    /// `PlatformWallet` borrow alive.
+    pub fn sdk_arc(&self) -> Arc<Sdk> {
+        Arc::clone(&self.sdk)
+    }
+
     /// Get a reference to the shared wallet manager lock.
     pub fn wallet_manager(&self) -> &Arc<RwLock<WalletManager<ManagedWalletInfo>>> {
         &self.wallet_manager
     }
@@ -263,12 +260,6 @@ impl PlatformWallet {
             wallet_id,
             wallet_persister.clone(),
         );
-        let tokens = TokenWallet::new(
-            Arc::clone(&sdk),
-            Arc::clone(&wallet_manager),
-            wallet_id,
-            wallet_persister.clone(),
-        );
 
         Self {
             wallet_id,
@@ -277,7 +268,6 @@ impl PlatformWallet {
             core,
             identity,
             platform,
-            tokens,
             asset_locks,
             persister: wallet_persister,
             balance,
@@ -401,7 +391,6 @@ impl Clone for PlatformWallet {
             core: self.core.clone(),
             identity: self.identity.clone(),
             platform: self.platform.clone(),
-            tokens: self.tokens.clone(),
             asset_locks: self.asset_locks.clone(),
             persister: self.persister.clone(),
             balance: self.balance.clone(),
diff --git a/packages/rs-platform-wallet/src/wallet/platform_wallet_traits.rs b/packages/rs-platform-wallet/src/wallet/platform_wallet_traits.rs
index f769637dea0..3b3b04d8464 100644
--- a/packages/rs-platform-wallet/src/wallet/platform_wallet_traits.rs
+++ b/packages/rs-platform-wallet/src/wallet/platform_wallet_traits.rs
@@ -37,8 +35,6 @@ impl WalletInfoInterface for PlatformWalletInfo {
             balance: std::sync::Arc::new(super::core::WalletBalance::new()),
             identity_manager: super::identity::IdentityManager::new(),
             tracked_asset_locks: std::collections::BTreeMap::new(),
-            token_watched: std::collections::BTreeMap::new(),
-            token_balances: std::collections::BTreeMap::new(),
         }
     }
@@ -51,8 +49,6 @@ impl WalletInfoInterface for PlatformWalletInfo {
             balance: std::sync::Arc::new(super::core::WalletBalance::new()),
             identity_manager: super::identity::IdentityManager::new(),
             tracked_asset_locks: std::collections::BTreeMap::new(),
-            token_watched: std::collections::BTreeMap::new(),
-            token_balances: std::collections::BTreeMap::new(),
         }
     }
 
diff --git a/packages/rs-platform-wallet/src/wallet/tokens/group_queries.rs b/packages/rs-platform-wallet/src/wallet/tokens/group_queries.rs
index bff64e6ad59..228fa15de3c 100644 --- a/packages/rs-platform-wallet/src/wallet/tokens/group_queries.rs +++ b/packages/rs-platform-wallet/src/wallet/tokens/group_queries.rs @@ -3,11 +3,17 @@ //! These helpers wrap rs-sdk's `GroupActionsQuery` and //! `GroupActionSignersQuery` so the FFI layer can return typed, //! Swift-friendly entries instead of raw `dpp` enums. They live here -//! (next to `wallet.rs`) rather than in rs-sdk-ffi because querying -//! pending proposals only makes sense in the context of a wallet's -//! `Sdk` reference and an identity's role in a contract's groups — -//! see `swift-sdk/CLAUDE.md`'s "high-level operations go through -//! platform-wallet" rule. +//! (next to the rest of the token bookkeeping) rather than in +//! rs-sdk-ffi because querying pending proposals only makes sense in +//! the context of a wallet's `Sdk` reference and an identity's role +//! in a contract's groups — see `swift-sdk/CLAUDE.md`'s "high-level +//! operations go through platform-wallet" rule. +//! +//! Exposed as free functions taking `&Sdk` rather than methods on a +//! token wallet: these are read-only network queries that don't +//! touch any wallet bookkeeping (no balance cache, no watch list). +//! Folding them onto a wallet type would needlessly require a +//! wallet handle for an operation that only needs an SDK. use dpp::balances::credits::TokenAmount; use dpp::data_contract::{GroupContractPosition, TokenContractPosition}; @@ -21,7 +27,6 @@ use dash_sdk::platform::group_actions::{GroupActionSignersQuery, GroupActionsQue use dash_sdk::platform::FetchMany; use crate::error::PlatformWalletError; -use crate::wallet::tokens::TokenWallet; /// A pending or closed group-action proposal on a token contract, /// flattened for cross-language consumers. @@ -98,155 +103,149 @@ pub struct GroupActionSignerEntry { pub power: u32, } -impl TokenWallet { - /// Fetch group-action proposals on `(contract_id, group_position)` - /// filtered by `status`. 
Returns flat [`GroupActionEntry`] rows - /// rather than rs-sdk's nested `IndexMap>` - /// so the FFI shim doesn't need to special-case the `None` - /// (proof-says-it-existed-but-deleted) case for every field. - pub async fn pending_group_actions_external( - &self, - contract_id: Identifier, - group_contract_position: GroupContractPosition, - status: GroupActionStatus, - start_at_action_id: Option<(Identifier, bool)>, - limit: Option, - ) -> Result, PlatformWalletError> { - use dpp::group::action_event::GroupActionEvent; - use dpp::group::group_action::GroupAction; - use dpp::tokens::token_event::TokenEvent; - - let query = GroupActionsQuery { - contract_id, - group_contract_position, - status, - start_at_action_id, - limit, - }; - - let rows = GroupAction::fetch_many(&self.sdk, query) - .await - .map_err(|e| { - PlatformWalletError::TokenError(format!("Fetch group actions failed: {}", e)) - })?; - - let mut out = Vec::with_capacity(rows.len()); - for (action_id, maybe_action) in rows { - let Some(action) = maybe_action else { continue }; - let proposer = action.proposer_id(); - let token_position = action.token_contract_position(); - let GroupActionEvent::TokenEvent(token_event) = action.event().clone(); - - let params = match token_event { - TokenEvent::Mint(amount, recipient, note) => GroupActionParams::Mint { - amount, - recipient, - public_note: note, - }, - TokenEvent::Burn(amount, burn_from, note) => GroupActionParams::Burn { - amount, - burn_from, - public_note: note, - }, - TokenEvent::Freeze(target, note) => GroupActionParams::Freeze { - target, - public_note: note, - }, - TokenEvent::Unfreeze(target, note) => GroupActionParams::Unfreeze { +/// Fetch group-action proposals on `(contract_id, group_position)` +/// filtered by `status`. Returns flat [`GroupActionEntry`] rows +/// rather than rs-sdk's nested `IndexMap>` +/// so the FFI shim doesn't need to special-case the `None` +/// (proof-says-it-existed-but-deleted) case for every field. 
+pub async fn pending_group_actions_external( + sdk: &dash_sdk::Sdk, + contract_id: Identifier, + group_contract_position: GroupContractPosition, + status: GroupActionStatus, + start_at_action_id: Option<(Identifier, bool)>, + limit: Option, +) -> Result, PlatformWalletError> { + use dpp::group::action_event::GroupActionEvent; + use dpp::group::group_action::GroupAction; + use dpp::tokens::token_event::TokenEvent; + + let query = GroupActionsQuery { + contract_id, + group_contract_position, + status, + start_at_action_id, + limit, + }; + + let rows = GroupAction::fetch_many(sdk, query).await.map_err(|e| { + PlatformWalletError::TokenError(format!("Fetch group actions failed: {}", e)) + })?; + + let mut out = Vec::with_capacity(rows.len()); + for (action_id, maybe_action) in rows { + let Some(action) = maybe_action else { continue }; + let proposer = action.proposer_id(); + let token_position = action.token_contract_position(); + let GroupActionEvent::TokenEvent(token_event) = action.event().clone(); + + let params = match token_event { + TokenEvent::Mint(amount, recipient, note) => GroupActionParams::Mint { + amount, + recipient, + public_note: note, + }, + TokenEvent::Burn(amount, burn_from, note) => GroupActionParams::Burn { + amount, + burn_from, + public_note: note, + }, + TokenEvent::Freeze(target, note) => GroupActionParams::Freeze { + target, + public_note: note, + }, + TokenEvent::Unfreeze(target, note) => GroupActionParams::Unfreeze { + target, + public_note: note, + }, + TokenEvent::DestroyFrozenFunds(target, amount, note) => { + GroupActionParams::DestroyFrozenFunds { target, + amount, public_note: note, - }, - TokenEvent::DestroyFrozenFunds(target, amount, note) => { - GroupActionParams::DestroyFrozenFunds { - target, - amount, - public_note: note, - } } - TokenEvent::EmergencyAction(action, note) => GroupActionParams::EmergencyAction { - action, - public_note: note, - }, - TokenEvent::ChangePriceForDirectPurchase(schedule, note) => { - let 
price_per_token = match schedule { - Some( - dpp::tokens::token_pricing_schedule::TokenPricingSchedule::SinglePrice( - p, - ), - ) => Some(p), - // Tiered schedules aren't surfaced to the simple-form - // co-sign UI yet — expose them as Other so the row at - // least renders, and a future wave can replay them. - Some(_) => { - out.push(GroupActionEntry { - action_id, - proposer, - token_contract_position: token_position, - status, - params: GroupActionParams::Other { - name: "directPricingTiered".to_string(), - }, - }); - continue; - } - None => None, - }; - GroupActionParams::SetPrice { - price_per_token, - public_note: note, + } + TokenEvent::EmergencyAction(action, note) => GroupActionParams::EmergencyAction { + action, + public_note: note, + }, + TokenEvent::ChangePriceForDirectPurchase(schedule, note) => { + let price_per_token = match schedule { + Some( + dpp::tokens::token_pricing_schedule::TokenPricingSchedule::SinglePrice(p), + ) => Some(p), + // Tiered schedules aren't surfaced to the simple-form + // co-sign UI yet — expose them as Other so the row at + // least renders, and a future wave can replay them. 
+ Some(_) => { + out.push(GroupActionEntry { + action_id, + proposer, + token_contract_position: token_position, + status, + params: GroupActionParams::Other { + name: "directPricingTiered".to_string(), + }, + }); + continue; } + None => None, + }; + GroupActionParams::SetPrice { + price_per_token, + public_note: note, } - TokenEvent::DirectPurchase(amount, total_cost) => { - GroupActionParams::DirectPurchase { amount, total_cost } - } - other => GroupActionParams::Other { - name: other.associated_document_type_name().to_string(), - }, - }; - - out.push(GroupActionEntry { - action_id, - proposer, - token_contract_position: token_position, - status, - params, - }); - } - - Ok(out) - } + } + TokenEvent::DirectPurchase(amount, total_cost) => { + GroupActionParams::DirectPurchase { amount, total_cost } + } + other => GroupActionParams::Other { + name: other.associated_document_type_name().to_string(), + }, + }; - /// Fetch the signers (identity id + power) currently signed onto - /// `action_id` on `(contract_id, group_position)`. Status mirrors - /// the [`pending_group_actions_external`] filter — Platform only - /// indexes signers under one status at a time. 
- pub async fn group_action_signers_external( - &self, - contract_id: Identifier, - group_contract_position: GroupContractPosition, - status: GroupActionStatus, - action_id: Identifier, - ) -> Result, PlatformWalletError> { - use dpp::data_contract::group::GroupMemberPower; - - let query = GroupActionSignersQuery { - contract_id, - group_contract_position, - status, + out.push(GroupActionEntry { action_id, - }; + proposer, + token_contract_position: token_position, + status, + params, + }); + } + + Ok(out) +} - let rows = GroupMemberPower::fetch_many(&self.sdk, query) - .await - .map_err(|e| { - PlatformWalletError::TokenError(format!("Fetch group action signers failed: {}", e)) - })?; - - let mut out = Vec::with_capacity(rows.len()); - for (identity_id, maybe_power) in rows { - let Some(power) = maybe_power else { continue }; - out.push(GroupActionSignerEntry { identity_id, power }); - } - Ok(out) +/// Fetch the signers (identity id + power) currently signed onto +/// `action_id` on `(contract_id, group_position)`. Status mirrors +/// the [`pending_group_actions_external`] filter — Platform only +/// indexes signers under one status at a time. 
+pub async fn group_action_signers_external( + sdk: &dash_sdk::Sdk, + contract_id: Identifier, + group_contract_position: GroupContractPosition, + status: GroupActionStatus, + action_id: Identifier, +) -> Result, PlatformWalletError> { + use dpp::data_contract::group::GroupMemberPower; + + let query = GroupActionSignersQuery { + contract_id, + group_contract_position, + status, + action_id, + }; + + let rows = GroupMemberPower::fetch_many(sdk, query) + .await + .map_err(|e| { + PlatformWalletError::TokenError(format!("Fetch group action signers failed: {}", e)) + })?; + + let mut out = Vec::with_capacity(rows.len()); + for (identity_id, maybe_power) in rows { + let Some(power) = maybe_power else { continue }; + out.push(GroupActionSignerEntry { identity_id, power }); } + Ok(out) } diff --git a/packages/rs-platform-wallet/src/wallet/tokens/mod.rs b/packages/rs-platform-wallet/src/wallet/tokens/mod.rs index 46bbbebd5d6..15fa51935fe 100644 --- a/packages/rs-platform-wallet/src/wallet/tokens/mod.rs +++ b/packages/rs-platform-wallet/src/wallet/tokens/mod.rs @@ -1,5 +1,19 @@ +//! Token-contract group-action discovery helpers. +//! +//! Per-(identity, token) balance bookkeeping lives on +//! [`crate::manager::identity_sync::IdentitySyncManager`] — there is +//! no `TokenWallet` anymore. Token *actions* (transfer / mint / burn +//! / freeze / unfreeze / claim / purchase / set-price / pause / +//! resume / destroy-frozen-funds / update-config) live on +//! [`IdentityWallet`](crate::wallet::identity::network::IdentityWallet) +//! since they're identity-as-actor operations. +//! +//! What stays in this module is the read-only group-action discovery +//! surface (free functions taking `&Sdk`). 
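The flattening contract described in `pending_group_actions_external`'s doc comment above — turn rs-sdk's nested map of `Option`-wrapped actions into flat entry rows so the FFI shim never special-cases the "proof-says-it-existed-but-deleted" `None` per field — can be sketched with stub types (these are illustrative stand-ins, not the real `dash_sdk` / `dpp` types):

```rust
use std::collections::BTreeMap;

// Stub of rs-sdk's proof-backed result shape: one Option per key,
// where None means "proved to have existed, since deleted".
#[derive(Clone)]
struct StubAction {
    proposer: u8,
}

struct Entry {
    action_id: u64,
    proposer: u8,
}

// Flatten, dropping tombstoned (None) rows, so downstream consumers
// see plain fields instead of an Option per row.
fn flatten(rows: BTreeMap<u64, Option<StubAction>>) -> Vec<Entry> {
    rows.into_iter()
        .filter_map(|(action_id, maybe)| {
            maybe.map(|a| Entry {
                action_id,
                proposer: a.proposer,
            })
        })
        .collect()
}
```

This mirrors the `let Some(action) = maybe_action else { continue };` loop in the real function; the free-function form (taking `&Sdk` rather than `&self`) keeps the same body, it just drops the unused wallet handle.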
+ mod group_queries; -mod wallet; -pub use group_queries::{GroupActionEntry, GroupActionParams, GroupActionSignerEntry}; -pub use wallet::TokenWallet; +pub use group_queries::{ + group_action_signers_external, pending_group_actions_external, GroupActionEntry, + GroupActionParams, GroupActionSignerEntry, +}; diff --git a/packages/rs-platform-wallet/src/wallet/tokens/wallet.rs b/packages/rs-platform-wallet/src/wallet/tokens/wallet.rs deleted file mode 100644 index 5b8ba807ab8..00000000000 --- a/packages/rs-platform-wallet/src/wallet/tokens/wallet.rs +++ /dev/null @@ -1,290 +0,0 @@ -//! Token wallet with per-identity registry-based balance tracking. -//! -//! Consumers register which tokens to watch per identity via -//! [`watch`](TokenWallet::watch). [`sync`](TokenWallet::sync) queries Platform -//! for balances of all watched identity+token pairs. -//! -//! Token *actions* (transfer / mint / burn / freeze / unfreeze / claim / -//! purchase / set-price / pause / resume / destroy-frozen-funds / -//! update-config) are identity-as-actor operations and live on -//! [`IdentityWallet`](crate::wallet::identity::network::IdentityWallet) -//! alongside the rest of the identity-lifecycle and DashPay surface. -//! What stays here is wallet-scoped bookkeeping only: the watch -//! registry, the per-identity balance cache, and the `sync` driver -//! that refreshes those balances from Platform. - -use std::collections::BTreeMap; -use std::sync::Arc; - -use dpp::balances::credits::TokenAmount; -use dpp::prelude::Identifier; -use tokio::sync::RwLock; - -use dash_sdk::platform::tokens::identity_token_balances::IdentityTokenBalancesQuery; -use dash_sdk::platform::FetchMany; - -use crate::changeset::{Merge, TokenBalanceChangeSet}; -use crate::error::PlatformWalletError; -use crate::wallet::platform_wallet::{PlatformWalletInfo, WalletId}; -use key_wallet_manager::WalletManager; - -/// Key for the balance cache and watch registry: (identity_id, token_id). 
-type IdentityTokenKey = (Identifier, Identifier); - -/// Token wallet providing per-identity token balance tracking. -/// -/// Tokens are watched per-identity via [`watch`](Self::watch) because Platform -/// has no "list all tokens for an identity" query — the caller must know which -/// token IDs each identity cares about. -#[derive(Clone)] -pub struct TokenWallet { - pub(crate) sdk: Arc, - /// The shared wallet manager lock for all mutable wallet state. - pub(crate) wallet_manager: Arc>>, - /// Identifies which wallet within the manager this sub-wallet operates on. - pub(crate) wallet_id: WalletId, - /// Per-wallet persistence handle for queuing changesets. - pub(crate) persister: crate::wallet::persister::WalletPersister, -} - -impl TokenWallet { - /// Create a new TokenWallet. - pub(crate) fn new( - sdk: Arc, - wallet_manager: Arc>>, - wallet_id: WalletId, - persister: crate::wallet::persister::WalletPersister, - ) -> Self { - Self { - sdk, - wallet_manager, - wallet_id, - persister, - } - } -} - -// --------------------------------------------------------------------------- -// Token registry (per-identity) -// --------------------------------------------------------------------------- - -impl TokenWallet { - /// Register a token for balance tracking on a specific identity. - /// - /// Persists the resulting changeset internally and returns `()`. - pub async fn watch(&self, identity_id: Identifier, token_id: Identifier) { - let mut wm = self.wallet_manager.write().await; - let mut cs = TokenBalanceChangeSet::default(); - if let Some(info) = wm.get_wallet_info_mut(&self.wallet_id) { - info.token_watched - .entry(identity_id) - .or_default() - .insert(token_id); - } - cs.watched.entry(identity_id).or_default().insert(token_id); - if let Err(e) = self.persister.store(cs.into()) { - tracing::error!("Failed to persist changeset: {}", e); - } - } - - /// Unregister a token from a specific identity and clear its cached balance. 
- /// - /// Persists the resulting changeset internally and returns `()`. - pub async fn unwatch(&self, identity_id: &Identifier, token_id: &Identifier) { - let mut wm = self.wallet_manager.write().await; - let mut cs = TokenBalanceChangeSet::default(); - if let Some(info) = wm.get_wallet_info_mut(&self.wallet_id) { - if let Some(tokens) = info.token_watched.get_mut(identity_id) { - tokens.remove(token_id); - if tokens.is_empty() { - info.token_watched.remove(identity_id); - } - } - info.token_balances.remove(&(*identity_id, *token_id)); - } - cs.unwatched - .entry(*identity_id) - .or_default() - .insert(*token_id); - cs.removed_balances.insert((*identity_id, *token_id)); - if let Err(e) = self.persister.store(cs.into()) { - tracing::error!("Failed to persist changeset: {}", e); - } - } - - /// Unregister all tokens for a specific identity and clear cached balances. - /// - /// Persists the resulting changeset internally and returns `()`. - pub async fn unwatch_identity(&self, identity_id: &Identifier) { - let mut wm = self.wallet_manager.write().await; - let mut cs = TokenBalanceChangeSet::default(); - if let Some(info) = wm.get_wallet_info_mut(&self.wallet_id) { - if let Some(tokens) = info.token_watched.remove(identity_id) { - cs.unwatched.insert(*identity_id, tokens); - } - let to_remove: Vec<_> = info - .token_balances - .keys() - .filter(|(iid, _)| iid == identity_id) - .copied() - .collect(); - for key in &to_remove { - info.token_balances.remove(key); - cs.removed_balances.insert(*key); - } - } - if let Err(e) = self.persister.store(cs.into()) { - tracing::error!("Failed to persist changeset: {}", e); - } - } - - /// Get the watched token IDs for a specific identity. 
- pub async fn watched_for(&self, identity_id: &Identifier) -> Vec { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .and_then(|info| info.token_watched.get(identity_id)) - .map(|tokens| tokens.iter().copied().collect()) - .unwrap_or_default() - } - - /// Get all watched (identity_id, token_id) pairs. - pub async fn watched(&self) -> Vec { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .map(|info| { - info.token_watched - .iter() - .flat_map(|(iid, tokens)| tokens.iter().map(move |tid| (*iid, *tid))) - .collect() - }) - .unwrap_or_default() - } -} - -// --------------------------------------------------------------------------- -// Sync -// --------------------------------------------------------------------------- - -impl TokenWallet { - /// Sync balances for all watched identity+token pairs. - /// - /// Queries Platform per identity, fetching only the tokens that identity - /// is watching. Updates the local cache and persists the resulting - /// changeset internally. Returns `()` on success. - pub async fn sync(&self) -> Result<(), PlatformWalletError> { - // Snapshot the watched tokens while holding the lock briefly. - let snapshot: BTreeMap> = { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .map(|info| { - info.token_watched - .iter() - .map(|(iid, tokens)| (*iid, tokens.iter().copied().collect())) - .collect() - }) - .unwrap_or_default() - }; - - let mut cs = TokenBalanceChangeSet::default(); - if snapshot.is_empty() { - return Ok(()); - } - - for (identity_id, token_ids) in &snapshot { - if token_ids.is_empty() { - continue; - } - - let query = IdentityTokenBalancesQuery { - identity_id: *identity_id, - token_ids: token_ids.clone(), - }; - - // No locks held during the network call. 
- let result: dash_sdk::platform::tokens::identity_token_balances::IdentityTokenBalances = - TokenAmount::fetch_many(&self.sdk, query) - .await - .map_err(|e| { - PlatformWalletError::TokenError(format!( - "Failed to fetch token balances for identity {}: {}", - identity_id, e - )) - })?; - - let mut wm = self.wallet_manager.write().await; - if let Some(info) = wm.get_wallet_info_mut(&self.wallet_id) { - for (token_id, maybe_balance) in result.iter() { - let key = (*identity_id, *token_id); - match maybe_balance { - Some(amount) => { - info.token_balances.insert(key, *amount); - cs.balances.insert(key, *amount); - } - None => { - info.token_balances.remove(&key); - cs.removed_balances.insert(key); - } - } - } - } - } - - if !cs.is_empty() { - if let Err(e) = self.persister.store(cs.into()) { - tracing::error!("Failed to persist changeset: {}", e); - } - } - - Ok(()) - } -} - -// --------------------------------------------------------------------------- -// Balance queries (from cache) -// --------------------------------------------------------------------------- - -impl TokenWallet { - /// Get the cached balance for a specific identity and token. - pub async fn balance( - &self, - identity_id: &Identifier, - token_id: &Identifier, - ) -> Option { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .and_then(|info| info.token_balances.get(&(*identity_id, *token_id)).copied()) - } - - /// Get all cached token balances for an identity. - pub async fn balances_for_identity( - &self, - identity_id: &Identifier, - ) -> BTreeMap { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .map(|info| { - info.token_balances - .iter() - .filter(|((iid, _), _)| iid == identity_id) - .map(|((_, tid), &amount)| (*tid, amount)) - .collect() - }) - .unwrap_or_default() - } - - /// Get all cached balances as (identity_id, token_id) -> amount. 
- pub async fn all_balances(&self) -> BTreeMap { - let wm = self.wallet_manager.read().await; - wm.get_wallet_info(&self.wallet_id) - .map(|info| info.token_balances.clone()) - .unwrap_or_default() - } -} - -impl std::fmt::Debug for TokenWallet { - fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { - f.debug_struct("TokenWallet") - .field("network", &self.sdk.network) - .finish() - } -} diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/DashModelContainer.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/DashModelContainer.swift index a4e7551ad4b..4bed6b600af 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/DashModelContainer.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/DashModelContainer.swift @@ -7,6 +7,9 @@ public enum DashModelContainer { public static var modelTypes: [any PersistentModel.Type] { [ PersistentIdentity.self, + PersistentDPNSName.self, + PersistentDashpayProfile.self, + PersistentDashpayContactRequest.self, PersistentDocument.self, PersistentDataContract.self, PersistentPublicKey.self, @@ -123,6 +126,41 @@ public enum DashMigrationPlan: SchemaMigrationPlan { /// watch-only state lives on the native `Wallet` / /// `ManagedAccount` (FFI-backed); persisting it on the SwiftData /// side was redundant and the persister never wrote it. +/// - `PersistentDPNSName` was added (cascade-owned by +/// `PersistentIdentity` via the new `dpnsNames` relationship) +/// so DPNS labels are persisted instead of recomputed on every +/// `IdentityDetailView` open. Existing dev stores predate the +/// row collection and rebuild on next sync; the changeset's +/// append-only merge policy populates the new rows from the +/// persister callback. +/// - `PersistentDashpayProfile` was added (cascade-owned by +/// `PersistentIdentity` via the new `dashpayProfile` optional +/// relationship). 
Mirrors `IdentityEntry::dashpay_profile` from +/// the FFI so DashPay profile fields (display name, public +/// message, avatar URL / hash / fingerprint, bio) are persisted +/// across launches instead of being refetched. Existing dev +/// stores predate the row and rebuild on next profile sync; the +/// persister upserts in place via +/// `PlatformWalletPersistenceHandler.upsertDashpayProfile`. +/// - `PersistentDashpayContactRequest` was added (cascade-owned by +/// `PersistentIdentity` via the new `contactRequests` collection). +/// Mirrors `ContactChangeSet::sent_requests` / +/// `incoming_requests` / `established` projected through the new +/// `on_persist_contacts_fn` FFI callback, with one row per +/// `(network, owner, contact, isOutgoing)` quad. Existing dev +/// stores predate the row collection and rebuild on next +/// DashPay contact sync. +/// - `PersistentAccount` gained `#Unique<…>([\.wallet, \.accountType, +/// \.accountIndex, \.userIdentityId, \.friendIdentityId])` plus +/// `@Attribute(.unique)` on `accountExtendedPubKeyBytes`. The +/// xpub field also flipped from `Data` to `Data?` so multiple +/// unhydrated rows (xpub not yet known) don't collide on the +/// UNIQUE constraint — SQL allows multiple `NULL`s. Together +/// these enforce "one row per account identity, one xpub per +/// account" at the database layer; pre-refactor the persister's +/// `applyAccountChangeset` was string-keyed on the legacy +/// `Debug`-formatted `account_type_name` and could grow +/// duplicate rows for the same logical account. 
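The interplay of the two `PersistentAccount` constraints above — a compound unique key over the account-identity tuple, plus a single-column UNIQUE on the xpub that (per SQL semantics) permits any number of NULLs — can be modeled in a few lines. This is a hypothetical in-memory mirror of the constraint logic, not the SwiftData/SQLite implementation; the field subset in `AccountKey` is abbreviated for illustration:

```rust
use std::collections::HashSet;

// Abbreviated stand-in for the compound #Unique tuple.
#[derive(Clone, PartialEq, Eq, Hash)]
struct AccountKey {
    account_type: u8,
    account_index: u32,
    user_identity_id: Vec<u8>,
}

struct Store {
    keys: HashSet<AccountKey>,
    xpubs: HashSet<Vec<u8>>, // only Some(xpub) values participate
}

impl Store {
    /// Returns false when either constraint would be violated.
    fn insert(&mut self, key: AccountKey, xpub: Option<Vec<u8>>) -> bool {
        if self.keys.contains(&key) {
            return false; // duplicate logical account
        }
        if let Some(x) = &xpub {
            if !self.xpubs.insert(x.clone()) {
                return false; // xpub reuse across accounts
            }
        } // None never collides: many unhydrated rows may coexist
        self.keys.insert(key);
        true
    }
}
```

The `None` branch is the whole point of flipping the column to `Data?`: freshly-inserted rows whose xpub isn't known yet all carry NULL and sail past the UNIQUE index, while two hydrated rows sharing an xpub are rejected.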
/// Each of those is a destructive change to a unique-attribute /// column or to relationship topology, so any pre-existing dev /// store will fail to open and get rebuilt from scratch on next diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentAccount.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentAccount.swift index 05a08a3804e..8fc1666c37a 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentAccount.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentAccount.swift @@ -18,6 +18,26 @@ import SwiftData /// through addresses; nothing is denormalized on this side. @Model public final class PersistentAccount { + /// Compound uniqueness on the full account-identity tuple: + /// `(wallet, accountType, accountIndex, standardTag, + /// registrationIndex, keyClass, userIdentityId, + /// friendIdentityId)`. Mirrors the persister's match logic + /// exactly — the variant disambiguators (`standardTag` for + /// BIP44 vs BIP32, `registrationIndex` for top-ups, `keyClass` + /// for PlatformPayment) are part of the key so legitimate + /// sibling accounts can coexist (e.g. BIP44 #0 and BIP32 #0, + /// or multiple top-up accounts on the same identity). + #Unique([ + \.wallet, + \.accountType, + \.accountIndex, + \.standardTag, + \.registrationIndex, + \.keyClass, + \.userIdentityId, + \.friendIdentityId, + ]) + /// Account type identifier — matches the `AccountTypeTagFFI` /// discriminant from the Rust side (0 = Standard, 1 = CoinJoin, /// … 14 = PlatformPayment, 15 = IdentityAuthenticationEcdsa, @@ -53,11 +73,15 @@ public final class PersistentAccount { /// other variants. public var friendIdentityId: Data /// Bincode-encoded `ExtendedPubKey` for this account. Populated by - /// `on_persist_account_fn`, consumed by `on_load_wallet_list_fn` - /// to reconstruct a watch-only `Account` via `Account::from_xpub`. 
- /// Empty `Data` means "not yet persisted" — account cannot be - /// restored silently. - public var accountExtendedPubKeyBytes: Data + /// `on_persist_account_registrations_fn`, consumed by + /// `on_load_wallet_list_fn` to reconstruct a watch-only `Account` + /// via `Account::from_xpub`. `nil` means "not yet persisted" — + /// account cannot be restored silently. Unique because two + /// accounts can't legitimately share an xpub (would imply a key + /// reuse / derivation collision); SQL UNIQUE allows multiple + /// `nil` values, so freshly-inserted unhydrated rows don't + /// conflict. + @Attribute(.unique) public var accountExtendedPubKeyBytes: Data? /// Record timestamps. public var createdAt: Date public var lastUpdated: Date @@ -101,7 +125,7 @@ public final class PersistentAccount { self.keyClass = 0 self.userIdentityId = Data() self.friendIdentityId = Data() - self.accountExtendedPubKeyBytes = Data() + self.accountExtendedPubKeyBytes = nil self.createdAt = Date() self.lastUpdated = Date() self.coreAddresses = [] diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentCoreAddress.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentCoreAddress.swift index 14ad9540219..7f73d0d26fe 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentCoreAddress.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentCoreAddress.swift @@ -4,7 +4,7 @@ import SwiftData /// SwiftData model for a single on-chain address tracked in a wallet's /// address pool (external / internal / absent). /// -/// Populated by the Rust-side `on_persist_account_addresses_fn` +/// Populated by the Rust-side `on_persist_account_address_pools_fn` /// callback, which fires at wallet creation (initial gap-limit fill), /// on pool extension (`next_unused` past the current tip), and when /// SPV marks an address used. 
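The cascade-delete change in the next hunk makes the persister's upsert-by-Base58Check behavior load-bearing: a pool refresh must reuse existing address rows, because deleting and re-inserting a row would now cascade-drop its historical TXOs. A minimal sketch of the required refresh shape (illustrative types, not the SwiftData models — `txo_count` stands in for the cascade-owned TXO relationship):

```rust
use std::collections::HashMap;

struct AddressRow {
    balance: u64,
    txo_count: usize, // stands in for cascade-owned TXO rows
}

/// Upsert each incoming (address, balance) pulse by its Base58Check
/// string. Existing rows are mutated in place so their TXOs survive;
/// there is deliberately no removal pass for addresses absent from
/// this pulse.
fn refresh_pool(pool: &mut HashMap<String, AddressRow>, incoming: Vec<(String, u64)>) {
    for (addr, balance) in incoming {
        pool.entry(addr)
            .and_modify(|row| row.balance = balance)
            .or_insert(AddressRow { balance, txo_count: 0 });
    }
}
```

A wholesale-replace strategy (clear then re-insert) would pass every test that only inspects addresses and balances, yet silently wipe the TXO chain under the `.cascade` rule — which is exactly the failure mode the doc comment warns about.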
@@ -37,7 +37,7 @@ public final class PersistentCoreAddress { /// SPV height of the most recent transaction touching this address. public var lastSeenHeight: UInt32 /// Cached balance in duffs from `AddressInfo.balance`. Updated by - /// subsequent `on_persist_account_addresses_fn` pulses. + /// subsequent `on_persist_account_address_pools_fn` pulses. public var balance: UInt64 /// Record timestamps. public var createdAt: Date @@ -46,11 +46,14 @@ public final class PersistentCoreAddress { /// Parent account. public var account: PersistentAccount? - /// TXOs paid to this address. `.nullify` on delete so dropping - /// an address row (e.g. pool rebuild) doesn't take its - /// historical TXOs with it — `PersistentTxo.address` (the - /// Base58Check string) remains the authoritative identifier. - @Relationship(deleteRule: .nullify, inverse: \PersistentTxo.coreAddress) + /// TXOs paid to this address. Cascade-delete: dropping the + /// address row takes its TXOs with it. The address is the + /// canonical owning record — no meaningful render path for an + /// address-less TXO. Pool rebuilds therefore need to reuse + /// existing rows (the persister upserts by Base58Check string, + /// which it already does) rather than wholesale-replace, or + /// the historical TXO chain gets wiped. + @Relationship(deleteRule: .cascade, inverse: \PersistentTxo.coreAddress) public var txos: [PersistentTxo] = [] public init( diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDPNSName.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDPNSName.swift new file mode 100644 index 00000000000..aa3b72872b2 --- /dev/null +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDPNSName.swift @@ -0,0 +1,150 @@ +import Foundation +import SwiftData + +/// SwiftData row for one confirmed DPNS label owned by a +/// `PersistentIdentity`. 
Mirrors a single +/// `platform_wallet::DpnsNameInfo` after it travels across the FFI on +/// `IdentityEntryFFI.dpns_names` / `dpns_names_acquired_at`. +/// +/// Why a dedicated model and not a `[String]` column: identities can +/// hold multiple DPNS labels and SwiftUI views want to observe the +/// list reactively — `@Query` over a row collection beats a `[String]` +/// column that views can only read in bulk on `onAppear`. +/// +/// This is purely a label cache. The DPNS document's `normalizedLabel` +/// (homograph-safe form used for the uniqueness lookup) is NOT +/// persisted here — DPNS lookups go through the SDK / platform-wallet, +/// and the local cache only needs to render the display label. +@Model +public final class PersistentDPNSName { + /// Compound uniqueness on `(networkRaw, normalizedParentDomainName, + /// normalizedLabel)`. Mirrors the DPNS contract's `domain` + /// document index `parentNameAndLabel` + /// (`normalizedParentDomainName + normalizedLabel`, `unique: true`) + /// and adds the network scope so two networks don't collide in a + /// shared local store. A label is only unique within a domain + /// on a given chain. + #Unique([\.networkRaw, \.normalizedParentDomainName, \.normalizedLabel]) + + /// Network discriminant. `Int` mirror of `AppNetwork.rawValue` — + /// Foundation's predicate engine compares it directly without a + /// custom converter. Stays in sync with `identity.networkRaw` + /// via the init; identities don't migrate between networks. + public var networkRaw: Int + + /// Type-safe accessor over `networkRaw`. Falls back to `.testnet` + /// if the stored raw value drifts — matches + /// `PersistentIdentity.network`. + public var network: AppNetwork { + get { AppNetwork(rawValue: networkRaw) ?? .testnet } + set { networkRaw = newValue.rawValue } + } + + /// Display label — the original case-and-letters form the user + /// registered, e.g. "Alice". Maps to the DPNS document's + /// `label` property. 
+ public var label: String + + /// Homograph-safe lowercase form of `label` used for lookups + /// (e.g. "Alice" → "a11ce"; `o`/`O`→`0`, `i`/`I`→`1`, + /// `l`/`L`→`1`, everything else lowercased). Maps to the DPNS + /// document's `normalizedLabel` property and participates in the + /// per-domain uniqueness above. Computed once on insert from + /// `label` via `Self.normalize(_:)`. + public var normalizedLabel: String + + /// Display parent domain — e.g. "dash". Maps to the DPNS + /// document's `parentDomainName` property. DPNS today only + /// supports the single top-level domain "dash", so the persister + /// stamps that as the default; the field exists so subdomain + /// support (when/if DPNS gains it) lands without a schema bump. + public var parentDomainName: String + + /// Homograph-safe form of `parentDomainName` used for lookups. + /// Maps to the DPNS document's `normalizedParentDomainName` + /// property and participates in the per-domain uniqueness above. + public var normalizedParentDomainName: String + + /// Unix-millis timestamp when the wallet first observed this + /// label belonging to the identity. Mirrors + /// `DpnsNameInfo.acquired_at`. `0` when unknown. + public var acquiredAt: UInt64 + + // MARK: - Relationships + + /// Owning identity. Cascade-deleted from the parent — losing the + /// identity row should drop its label cache too. The `inverse` + /// declaration on `PersistentIdentity.dpnsNames` is the source of + /// truth for this association. + /// + /// Non-optional: every DPNS-label row exists *because* of an + /// identity. The persister wires it at construction time + /// (before insert) so SwiftData's non-optional relationship + /// contract is honored. 
+ public var identity: PersistentIdentity + + // MARK: - Timestamps + + public var createdAt: Date + public var lastUpdated: Date + + // MARK: - Initialization + + public init( + identity: PersistentIdentity, + label: String, + parentDomainName: String = "dash", + acquiredAt: UInt64 = 0 + ) { + self.identity = identity + self.networkRaw = identity.networkRaw + self.label = label + self.normalizedLabel = Self.normalize(label) + self.parentDomainName = parentDomainName + self.normalizedParentDomainName = Self.normalize(parentDomainName) + self.acquiredAt = acquiredAt + self.createdAt = Date() + self.lastUpdated = Date() + } +} + +// MARK: - Normalization + +extension PersistentDPNSName { + /// Homograph-safe lowercasing identical to the DPNS contract's + /// `normalizedLabel` rule (and to + /// `dash_sdk::platform::dpns_usernames::convert_to_homograph_safe_chars`): + /// `o`/`O`→`0`, `i`/`I`→`1`, `l`/`L`→`1`, every other character + /// ASCII-lowercased. Run on label and parent at insert time so the + /// persisted row matches what the platform stores in the DPNS + /// document. We mirror the rule on the Swift side (rather than + /// routing the bare label through `dash_sdk_dpns_normalize_username`) + /// to avoid an FFI hop per row — the rule is closed-form and + /// stable across releases. + public static func normalize(_ input: String) -> String { + String(input.map { c -> Character in + switch c { + case "o", "O": return "0" + case "i", "I": return "1" + case "l", "L": return "1" + default: return Character(c.lowercased()) + } + }) + } +} + +// MARK: - Queries + +extension PersistentDPNSName { + /// Predicate filtering all DPNS-label rows that belong to a + /// specific identity. Traverses the `identity` relationship to + /// match its `identityId` — safe because the relationship is + /// non-optional and SwiftData's predicate engine handles + /// non-optional one-hop traversal cleanly. 
+ public static func predicate(identityId: Data) -> Predicate<PersistentDPNSName> { + let target = identityId + return #Predicate { name in + name.identity.identityId == target + } + } +} diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayContactRequest.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayContactRequest.swift new file mode 100644 index 00000000000..7e8679f1374 --- /dev/null +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayContactRequest.swift @@ -0,0 +1,174 @@ +import Foundation +import SwiftData + +/// SwiftData row for one DashPay `contactRequest` document — the +/// directional, encrypted-key payload the sender publishes when +/// initiating contact with another identity. Populated by the +/// platform-wallet persister callback whenever a `ContactChangeSet` +/// rides on the FFI changeset (`on_persist_contacts_fn`). +/// +/// One row per `(network, owner, contact, isOutgoing)` quad. The +/// outgoing and incoming directions for the same `(owner, contact)` +/// pair coexist as **distinct rows** because the encrypted payload +/// differs per direction (each side's `encryptedPublicKey` is sealed +/// to the other party's identity key), so the unique constraint +/// includes the direction bit. +/// +/// Cascade-deleted from `PersistentIdentity.contactRequests` — losing +/// the owner identity drops every contact-request row that named it +/// as `owner`. +@Model +public final class PersistentDashpayContactRequest { + /// Compound uniqueness on `(networkRaw, ownerIdentityId, + /// contactIdentityId, isOutgoing)`. Mirrors the per-direction + /// keying the Rust changeset uses on + /// `ContactChangeSet::sent_requests` / + /// `incoming_requests`, scoped by network so two networks don't + /// collide in a shared local store. + #Unique([ + \.networkRaw, \.ownerIdentityId, \.contactIdentityId, \.isOutgoing + ]) + + /// Network discriminant. 
`Int` mirror of `AppNetwork.rawValue` — + /// Foundation's predicate engine compares it directly without a + /// custom converter. Kept in sync with `owner.networkRaw` by the + /// init. + public var networkRaw: Int + + /// Type-safe accessor over `networkRaw`. Falls back to `.testnet` + /// if the stored raw value drifts. + public var network: AppNetwork { + get { AppNetwork(rawValue: networkRaw) ?? .testnet } + set { networkRaw = newValue.rawValue } + } + + /// Owning (wallet-managed) identity's 32-byte id, denormalized so + /// `#Predicate` filters can match without a relationship traversal + /// through the optional `owner` join. Always equal to + /// `owner.identityId` — kept in sync by the persister. + public var ownerIdentityId: Data + + /// Other party's 32-byte identity id. For outgoing rows this is + /// the recipient (`ContactRequest::recipient_id`); for incoming + /// rows this is the sender (`ContactRequest::sender_id`). The + /// `isOutgoing` bit disambiguates which direction this row + /// represents. + public var contactIdentityId: Data + + /// Direction bit. `true` ⇒ owner sent this request to contact; + /// `false` ⇒ contact sent this request to owner. Same shape as + /// the Rust `ContactRequestFFI::is_outgoing` field. + public var isOutgoing: Bool + + // MARK: - Payload — round-trips `ContactRequest` verbatim + + /// `ContactRequest::sender_key_index` — index of the sender's + /// identity public key used for the ECDH that encrypted the + /// payload. + public var senderKeyIndex: UInt32 + + /// `ContactRequest::recipient_key_index`. + public var recipientKeyIndex: UInt32 + + /// `ContactRequest::account_reference` — DashPay account derivation + /// hint the sender encoded in the request. + public var accountReference: UInt32 + + /// `ContactRequest::encrypted_public_key` bytes. Always non-empty + /// — every contact-request document carries an encrypted key. 
+ public var encryptedPublicKey: Data + + /// `ContactRequest::encrypted_account_label` bytes, when present. + /// `nil` mirrors the source `Option` being `None`. + public var encryptedAccountLabel: Data? + + /// `ContactRequest::auto_accept_proof` bytes, when present. `nil` + /// mirrors the source `Option` being `None`. + public var autoAcceptProof: Data? + + /// `ContactRequest::core_height_created_at` — the Core block + /// height at which the request landed on Platform. + public var coreHeightCreatedAt: UInt32 + + /// `ContactRequest::created_at` — Unix-millis timestamp the + /// request document was created. + public var createdAtMillis: UInt64 + + // MARK: - Relationships + + /// Owning identity — the wallet-managed identity this row's + /// `ownerIdentityId` denormalizes. Non-optional: every + /// contact-request row exists *because of* an owner identity. + /// Cascade-deleted from `PersistentIdentity.contactRequests`. + public var owner: PersistentIdentity + + // MARK: - Timestamps + + public var createdAt: Date + public var lastUpdated: Date + + // MARK: - Initialization + + public init( + owner: PersistentIdentity, + contactIdentityId: Data, + isOutgoing: Bool, + senderKeyIndex: UInt32, + recipientKeyIndex: UInt32, + accountReference: UInt32, + encryptedPublicKey: Data, + encryptedAccountLabel: Data? = nil, + autoAcceptProof: Data? 
= nil, + coreHeightCreatedAt: UInt32, + createdAtMillis: UInt64 + ) { + self.owner = owner + self.networkRaw = owner.networkRaw + self.ownerIdentityId = owner.identityId + self.contactIdentityId = contactIdentityId + self.isOutgoing = isOutgoing + self.senderKeyIndex = senderKeyIndex + self.recipientKeyIndex = recipientKeyIndex + self.accountReference = accountReference + self.encryptedPublicKey = encryptedPublicKey + self.encryptedAccountLabel = encryptedAccountLabel + self.autoAcceptProof = autoAcceptProof + self.coreHeightCreatedAt = coreHeightCreatedAt + self.createdAtMillis = createdAtMillis + self.createdAt = Date() + self.lastUpdated = Date() + } +} + +// MARK: - Queries + +extension PersistentDashpayContactRequest { + /// Predicate filtering all contact-request rows that belong to a + /// specific owner identity. Filters on the denormalized + /// `ownerIdentityId` scalar so SwiftData's predicate engine + /// doesn't have to traverse the `owner` relationship — same shape + /// as the dpns-name predicate. + public static func predicate( + ownerIdentityId: Data + ) -> Predicate<PersistentDashpayContactRequest> { + let target = ownerIdentityId + return #Predicate { row in + row.ownerIdentityId == target + } + } + + /// Direction-scoped variant of [`predicate(ownerIdentityId:)`]. + /// Used by the DashPay views that show only outgoing requests + /// (sent), only incoming requests (received), or only established + /// contacts (read both directions and join on `contactIdentityId`). 
+ public static func predicate( + ownerIdentityId: Data, + isOutgoing: Bool + ) -> Predicate<PersistentDashpayContactRequest> { + let target = ownerIdentityId + let direction = isOutgoing + return #Predicate { row in + row.ownerIdentityId == target && row.isOutgoing == direction + } + } +} diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayProfile.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayProfile.swift new file mode 100644 index 00000000000..1d228be18fd --- /dev/null +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentDashpayProfile.swift @@ -0,0 +1,130 @@ +import Foundation +import SwiftData + +/// SwiftData row mirroring the DashPay `profile` document for one +/// `PersistentIdentity`. Populated by the platform-wallet persister +/// callback whenever an `IdentityEntry.dashpay_profile` rides on the +/// FFI changeset (`IdentityEntryFFI.dashpay_profile_present == true`). +/// +/// One row per (network, identity): the DashPay contract enforces a +/// single `profile` document per `ownerId` (its only `unique` index), +/// so we mirror that with a compound uniqueness on `(networkRaw, +/// identity)` rather than rebinding to `identityId` directly — the +/// `identity` relationship is non-optional and the same uniqueness +/// shape as `PersistentDPNSName`. +/// +/// Cascade-deleted from the parent `PersistentIdentity` via the +/// `dashpayProfile` relationship: dropping an identity drops its +/// profile cache. +@Model +public final class PersistentDashpayProfile { + /// Compound uniqueness on `(networkRaw, identity)`. Mirrors the + /// DashPay contract's per-`ownerId` uniqueness on the `profile` + /// document, scoped by network so two networks don't collide in a + /// shared local store. + #Unique([\.networkRaw, \.identity]) + + /// Network discriminant. `Int` mirror of `AppNetwork.rawValue` — + /// Foundation's predicate engine compares it directly without a + /// custom converter. 
Stays in sync with `identity.networkRaw` + /// (set by the init); identities don't migrate between networks. + public var networkRaw: Int + + /// Type-safe accessor over `networkRaw`. Falls back to `.testnet` + /// if the stored raw value drifts — matches + /// `PersistentIdentity.network`. + public var network: AppNetwork { + get { AppNetwork(rawValue: networkRaw) ?? .testnet } + set { networkRaw = newValue.rawValue } + } + + // MARK: - Profile fields + // + // All optional — every `dashpay.profile` document field is + // optional in the contract schema except the implicit + // `$ownerId`. We mirror that on the row so partial profiles + // (only an `avatarUrl` set, only a `displayName` set, etc.) + // round-trip without forcing placeholder values. + + /// `displayName` field on the DashPay `profile` document. Up to + /// 25 chars per the contract schema. + public var displayName: String? + + /// `publicMessage` field on the DashPay `profile` document. Up to + /// 140 chars per the contract schema. + public var publicMessage: String? + + /// `bio` field. Not part of the v3 DashPay contract today; the + /// FFI carries the slot for forwards-compat with future contract + /// revisions and the column is reserved here so adding it doesn't + /// trigger a destructive schema change. + public var bio: String? + + /// `avatarUrl` field. URL string the consumer is expected to + /// fetch + cache locally; the binary asset itself is never + /// persisted on this row. + public var avatarUrl: String? + + /// `avatarHash` field — 32-byte hash of the avatar binary, + /// stored alongside the URL so consumers can verify the fetched + /// asset matches what the profile author published. `nil` when + /// the underlying `avatar_hash` was `None`. + public var avatarHash: Data? + + /// `avatarFingerprint` field — 8-byte perceptual hash for + /// quick equality checks on cached avatars without rehashing the + /// full asset. 
`nil` when the underlying `avatar_fingerprint` + /// was `None`. + public var avatarFingerprint: Data? + + // MARK: - Relationships + + /// Owning identity. Non-optional — a profile only exists in the + /// context of an identity. Cascade-deleted from the parent's + /// `dashpayProfile` relationship; the persister wires this up at + /// construction time. + public var identity: PersistentIdentity + + // MARK: - Timestamps + + public var createdAt: Date + public var lastUpdated: Date + + // MARK: - Initialization + + public init( + identity: PersistentIdentity, + displayName: String? = nil, + publicMessage: String? = nil, + bio: String? = nil, + avatarUrl: String? = nil, + avatarHash: Data? = nil, + avatarFingerprint: Data? = nil + ) { + self.identity = identity + self.networkRaw = identity.networkRaw + self.displayName = displayName + self.publicMessage = publicMessage + self.bio = bio + self.avatarUrl = avatarUrl + self.avatarHash = avatarHash + self.avatarFingerprint = avatarFingerprint + self.createdAt = Date() + self.lastUpdated = Date() + } +} + +// MARK: - Queries + +extension PersistentDashpayProfile { + /// Predicate filtering all rows that belong to a specific + /// identity. Traverses the non-optional `identity` relationship + /// to match its `identityId` — same shape as + /// `PersistentDPNSName.predicate(identityId:)`. 
+ public static func predicate(identityId: Data) -> Predicate<PersistentDashpayProfile> { + let target = identityId + return #Predicate { profile in + profile.identity.identityId == target + } + } +} diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentIdentity.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentIdentity.swift index ff8b498244f..9f8da43d309 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentIdentity.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentIdentity.swift @@ -10,6 +10,12 @@ public final class PersistentIdentity { public var revision: Int64 public var isLocal: Bool public var alias: String? + /// User's chosen primary display label (the one rendered on + /// list rows and avatars). Populated when the user picks a main + /// name through the `mainDpnsName` selection flow, or as the + /// fallback set during initial registration. The full label + /// collection lives on the `dpnsNames` relationship below; this + /// scalar is just the "show this one in the cell" hint. public var dpnsName: String? public var mainDpnsName: String? public var identityType: String @@ -73,6 +79,40 @@ public final class PersistentIdentity { @Relationship(deleteRule: .cascade, inverse: \PersistentDocument.ownerIdentity) public var documents: [PersistentDocument] @Relationship(deleteRule: .nullify) public var tokenBalances: [PersistentTokenBalance] + /// Confirmed DPNS labels owned by this identity. Cascade-deleted + /// from the parent — losing the identity row drops the label + /// cache too. Append-only on the write path: the changeset's + /// merge policy never removes labels (DPNS doesn't expose a + /// user-driven "delete name" today), so the persister callback + /// only inserts new rows, never removes them. 
Predicates filter + /// through `PersistentDPNSName.predicate(identityId:)`, a one-hop + /// traversal of the non-optional `identity` relationship, rather + /// than by walking this collection from a view. + @Relationship(deleteRule: .cascade, inverse: \PersistentDPNSName.identity) + public var dpnsNames: [PersistentDPNSName] = [] + + /// DashPay profile cache for this identity — at most one row per + /// (network, identity) per the contract's per-`ownerId` + /// uniqueness on the `profile` document. Cascade-deleted from the + /// parent. Optional because not every identity has published a + /// profile (and the FFI changeset's `dashpay_profile: None` + /// semantics mean "no update", not "delete" — the persister never + /// nils this out from a flush). Inserted / refreshed by + /// `PlatformWalletPersistenceHandler.upsertDashpayProfile(...)`. + @Relationship(deleteRule: .cascade, inverse: \PersistentDashpayProfile.identity) + public var dashpayProfile: PersistentDashpayProfile? + + /// DashPay contact-request rows owned by this identity (both + /// outgoing and incoming). Cascade-deleted from the parent. + /// Filters use + /// `PersistentDashpayContactRequest.predicate(ownerIdentityId:)`, + /// which matches the denormalized `ownerIdentityId` scalar, + /// rather than walking this collection from a SwiftUI view. + /// Append / overwrite / delete on the write path: the persister + /// callback applies upserts (per `(owner, contact, isOutgoing)`) + /// and tombstones (`removed_sent` / `removed_incoming`) directly. + @Relationship(deleteRule: .cascade, inverse: \PersistentDashpayContactRequest.owner) + public var contactRequests: [PersistentDashpayContactRequest] = [] + // Contracts in the local store that name this identity as their // owner. 
`.nullify` so deleting the identity leaves the contract // rows alive (with `ownerIdentity` nulled) — matches the user's @@ -115,6 +155,9 @@ public final class PersistentIdentity { self.publicKeys = [] self.documents = [] self.tokenBalances = [] + self.dpnsNames = [] + self.dashpayProfile = nil + self.contactRequests = [] self.ownedDataContracts = [] self.createdAt = Date() self.lastUpdated = Date() diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentPlatformAddress.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentPlatformAddress.swift index ab87e1c36b3..de5d5f4a825 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentPlatformAddress.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentPlatformAddress.swift @@ -5,8 +5,8 @@ import SwiftData /// /// Each record represents one HD-derived Platform Payment address, /// combining the derivation metadata (populated by the Rust -/// `on_persist_account_addresses_fn` callback at wallet creation / -/// pool extension) with the credit balance + nonce snapshot reported +/// `on_persist_account_address_pools_fn` callback at wallet creation +/// / pool extension) with the credit balance + nonce snapshot reported /// by the BLAST sync round. Records are upserted incrementally — /// address emits seed the row, balance emits refresh `balance` / /// `nonce` / `isUsed` / `last_seen_height`. 
diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/ManagedPlatformWallet.swift b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/ManagedPlatformWallet.swift index 578f54a5116..cac091d0a64 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/ManagedPlatformWallet.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/ManagedPlatformWallet.swift @@ -1986,24 +1986,19 @@ public struct InMemoryWalletSummary: Sendable { /// Number of asset locks tracked in /// `PlatformWalletInfo.tracked_asset_locks`. public let trackedAssetLocksCount: Int - /// Number of `(identity_id, token_id) -> amount` rows on - /// `PlatformWalletInfo.token_balances`. - public let tokenBalancesCount: Int public init( identitiesCount: Int, watchedCount: Int, lastScannedIndex: UInt32, primaryIdentityId: Identifier?, - trackedAssetLocksCount: Int, - tokenBalancesCount: Int + trackedAssetLocksCount: Int ) { self.identitiesCount = identitiesCount self.watchedCount = watchedCount self.lastScannedIndex = lastScannedIndex self.primaryIdentityId = primaryIdentityId self.trackedAssetLocksCount = trackedAssetLocksCount - self.tokenBalancesCount = tokenBalancesCount } } @@ -2049,8 +2044,7 @@ extension ManagedPlatformWallet { // Primary-identity selection no longer lives on the Rust // side; UI layer owns it now. 
primaryIdentityId: nil, - trackedAssetLocksCount: Int(ffi.tracked_asset_locks_count), - tokenBalancesCount: Int(ffi.token_balances_count) + trackedAssetLocksCount: Int(ffi.tracked_asset_locks_count) ) } diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerIdentitySync.swift b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerIdentitySync.swift new file mode 100644 index 00000000000..72ce4c8b3de --- /dev/null +++ b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerIdentitySync.swift @@ -0,0 +1,320 @@ +import Foundation + +/// One row of the per-(identity, token) sync cache held by the +/// Rust-side `IdentitySyncManager`. Mirrors the FFI +/// `IdentityTokenSyncInfoFFI` shape — `identity_id` is replicated on +/// every row so the whole-store snapshot stays a flat array. +public struct IdentityTokenSyncRow: Sendable { + public let identityId: Identifier + public let tokenId: Identifier + public let contractId: Identifier + public let balance: UInt64 + public let identityContractNonce: UInt64 + + init(ffi: IdentityTokenSyncInfoFFI) { + var identity = ffi.identity_id + self.identityId = withUnsafeBytes(of: &identity) { Data($0) } + var token = ffi.token_id + self.tokenId = withUnsafeBytes(of: &token) { Data($0) } + var contract = ffi.contract_id + self.contractId = withUnsafeBytes(of: &contract) { Data($0) } + self.balance = ffi.balance + self.identityContractNonce = ffi.identity_contract_nonce + } +} + +/// Snapshot of one identity's sync state (rows + last-sync timestamp) +/// returned by `identityTokenSyncState(for:)`. 
+public struct IdentityTokenSyncSnapshot: Sendable { + public let rows: [IdentityTokenSyncRow] + public let lastSyncUnixSeconds: UInt64 + + public init(rows: [IdentityTokenSyncRow], lastSyncUnixSeconds: UInt64) { + self.rows = rows + self.lastSyncUnixSeconds = lastSyncUnixSeconds + } +} + +extension PlatformWalletManager { + /// Start the identity-token sync background loop. + public func startIdentityTokenSync() throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_start(handle, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Stop the identity-token sync background loop. + public func stopIdentityTokenSync() throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_stop(handle, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Whether the identity-token sync background loop is running. + public func isIdentityTokenSyncRunning() throws -> Bool { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var running = false + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_is_running(handle, &running, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + return running + } + + /// Whether an identity-token sync pass is currently in flight. 
+ public func isIdentityTokenSyncing() throws -> Bool { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var syncing = false + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_is_syncing(handle, &syncing, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + return syncing + } + + /// Unix seconds of the last completed identity-token sync pass for + /// the given identity, or 0 if it has never been synced. + public func lastIdentityTokenSyncUnixSeconds(for identityId: Identifier) throws -> UInt64 { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + guard identityId.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + var lastSync: UInt64 = 0 + var error = PlatformWalletFFIError() + let result = identityId.withUnsafeBytes { idPtr -> PlatformWalletFFIResult in + platform_wallet_manager_identity_sync_last_sync_unix_seconds( + handle, + idPtr.bindMemory(to: UInt8.self).baseAddress, + &lastSync, + &error + ) + } + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + return lastSync + } + + /// Set the polling interval (clamped to >= 1 second on the Rust side). + public func setIdentityTokenSyncInterval(seconds: UInt64) throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_set_interval(handle, seconds, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Run one identity-token sync pass across every registered + /// identity. Synchronous from the FFI side — runs on a worker + /// detached `Task`. 
If a pass is already in flight, returns + /// without doing extra work. + public func syncIdentityTokensNow() async throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + let handle = self.handle + try await Task.detached(priority: .userInitiated) { + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_sync_now(handle, &error) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + }.value + } + + /// Add or replace the sync registry row for `identityId`. Each + /// entry in `tokenIds` becomes a watched-token row with + /// placeholder balance/contract/nonce until the next sync pass + /// populates real values. Idempotent — calling with the same + /// identity replaces the row. + public func registerIdentityForTokenSync( + identityId: Identifier, + tokenIds: [Identifier] + ) throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + guard identityId.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + var error = PlatformWalletFFIError() + // Flatten token ids into one contiguous 32*N buffer so the + // FFI can read them as back-to-back chunks. 
+ var flat = Data(capacity: 32 * tokenIds.count) + for tid in tokenIds { + guard tid.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + flat.append(tid) + } + let result = identityId.withUnsafeBytes { idPtr -> PlatformWalletFFIResult in + flat.withUnsafeBytes { tokensPtr -> PlatformWalletFFIResult in + platform_wallet_manager_identity_sync_register_identity( + handle, + idPtr.bindMemory(to: UInt8.self).baseAddress, + tokensPtr.bindMemory(to: UInt8.self).baseAddress, + UInt(tokenIds.count), + &error + ) + } + } + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Remove `identityId` from the sync registry. Idempotent. + public func unregisterIdentityForTokenSync(identityId: Identifier) throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + guard identityId.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + var error = PlatformWalletFFIError() + let result = identityId.withUnsafeBytes { idPtr -> PlatformWalletFFIResult in + platform_wallet_manager_identity_sync_unregister_identity( + handle, + idPtr.bindMemory(to: UInt8.self).baseAddress, + &error + ) + } + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Replace the watched-token list on an already-registered + /// identity. No-op when the identity isn't registered (call + /// `registerIdentityForTokenSync` first if you want promotion). 
+ public func updateWatchedTokensForTokenSync( + identityId: Identifier, + tokenIds: [Identifier] + ) throws { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + guard identityId.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + var error = PlatformWalletFFIError() + var flat = Data(capacity: 32 * tokenIds.count) + for tid in tokenIds { + guard tid.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + flat.append(tid) + } + let result = identityId.withUnsafeBytes { idPtr -> PlatformWalletFFIResult in + flat.withUnsafeBytes { tokensPtr -> PlatformWalletFFIResult in + platform_wallet_manager_identity_sync_update_watched_tokens( + handle, + idPtr.bindMemory(to: UInt8.self).baseAddress, + tokensPtr.bindMemory(to: UInt8.self).baseAddress, + UInt(tokenIds.count), + &error + ) + } + } + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + } + + /// Snapshot the per-identity token sync state for one identity. + /// Returns `nil` when the identity has no cached state. + public func identityTokenSyncState( + for identityId: Identifier + ) throws -> IdentityTokenSyncSnapshot? { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + guard identityId.count == 32 else { + throw PlatformWalletError.invalidIdentifier + } + + var rowsPtr: UnsafeMutablePointer<IdentityTokenSyncInfoFFI>? 
= nil + var rowsCount: UInt = 0 + var lastSync: UInt64 = 0 + var error = PlatformWalletFFIError() + let result = identityId.withUnsafeBytes { idPtr -> PlatformWalletFFIResult in + platform_wallet_manager_identity_sync_state_for_identity( + handle, + idPtr.bindMemory(to: UInt8.self).baseAddress, + &rowsPtr, + &rowsCount, + &lastSync, + &error + ) + } + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + + defer { + if let rowsPtr { + platform_wallet_manager_identity_sync_state_free(rowsPtr, rowsCount) + } + } + + guard let rowsPtr else { + return nil + } + let buffer = UnsafeBufferPointer(start: rowsPtr, count: Int(rowsCount)) + let rows = buffer.map { IdentityTokenSyncRow(ffi: $0) } + return IdentityTokenSyncSnapshot(rows: rows, lastSyncUnixSeconds: lastSync) + } + + /// Snapshot the per-identity token sync state for every cached + /// identity in one flat array. + public func allIdentityTokenSyncRows() throws -> [IdentityTokenSyncRow] { + guard isConfigured, handle != NULL_HANDLE else { + throw PlatformWalletError.invalidHandle + } + var rowsPtr: UnsafeMutablePointer<IdentityTokenSyncInfoFFI>? 
= nil + var rowsCount: UInt = 0 + var error = PlatformWalletFFIError() + let result = platform_wallet_manager_identity_sync_state_all( + handle, + &rowsPtr, + &rowsCount, + &error + ) + guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { + throw PlatformWalletError(result: result, error: error) + } + + defer { + if let rowsPtr { + platform_wallet_manager_identity_sync_state_free(rowsPtr, rowsCount) + } + } + + guard let rowsPtr else { + return [] + } + let buffer = UnsafeBufferPointer(start: rowsPtr, count: Int(rowsCount)) + return buffer.map { IdentityTokenSyncRow(ffi: $0) } + } +} diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift index b270c1a3fd8..96d984e682b 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift @@ -281,31 +281,65 @@ public class PlatformWalletPersistenceHandler { walletRecord: PersistentWallet, acc: AccountChangeSetFFI ) { - let typeName = acc.account_type_name.map { String(cString: $0) } ?? "Unknown" let accountIndex = acc.account_index + // Stable account-type discriminants from the FFI. Used as the + // upsert key so a load-path emit and a sync-path emit for the + // same account collapse onto a single row — the legacy + // `account_type_name` string was Rust's `Debug` output, which + // differs from the canonical name the load path emits ("BIP44 + // Account" vs "Standard { index: 0, … }") and made the + // string-keyed predicate produce duplicate rows. + // `AccountTypeTagFFI` / `StandardAccountTypeTagFFI` come over + // as plain `UInt8` aliases (cbindgen flat-enum projection). 
+ let typeTag = UInt32(acc.type_tag) + let standardTag = UInt8(acc.standard_tag) + let registrationIndex = acc.registration_index + let keyClass = acc.key_class + let userIdentityId = withUnsafeBytes(of: acc.user_identity_id) { Data($0) } + let friendIdentityId = withUnsafeBytes(of: acc.friend_identity_id) { Data($0) } + let typeName = accountTypeName(for: acc.type_tag, standardTag: acc.standard_tag) - // Upsert account (keyed by wallet + typeName + accountIndex). + // Upsert keyed by the full account identity. We can't easily + // express the identity tuple in a #Predicate with local `Data` + // captures, so fetch by (walletId, accountType, accountIndex) + // and verify the richer fields in Swift — same pattern the + // load path uses for `applyAccountSpec`. let walletId = walletRecord.walletId let accountDescriptor = FetchDescriptor<PersistentAccount>( predicate: #Predicate { $0.wallet.walletId == walletId - && $0.accountTypeName == typeName + && $0.accountType == typeTag && $0.accountIndex == accountIndex } ) + let existing = (try? backgroundContext.fetch(accountDescriptor)) ?? [] + let match = existing.first { row in + row.standardTag == standardTag + && row.registrationIndex == registrationIndex + && row.keyClass == keyClass + && row.userIdentityId == userIdentityId + && row.friendIdentityId == friendIdentityId + } let account: PersistentAccount - if let existing = try? backgroundContext.fetch(accountDescriptor).first { - account = existing + if let match = match { + account = match account.lastUpdated = Date() } else { account = PersistentAccount( wallet: walletRecord, - accountType: 0, + accountType: typeTag, accountIndex: accountIndex, accountTypeName: typeName ) backgroundContext.insert(account) } + // Refresh the variant-specific fields so the row stays in + // sync with the latest emit (matches the load-path apply). 
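The coarse-fetch-then-refine pattern described above (a cheap `#Predicate` on the scalar columns, then a Swift-side check of the `Data`-typed identity fields that the predicate can't capture) can be sketched with plain values outside SwiftData. All names here are illustrative stand-ins, not the SDK's actual types:

```swift
import Foundation

// Illustrative stand-in for a PersistentAccount row. The coarse key is
// (walletId, accountType, accountIndex); the richer variant-specific
// fields are checked in Swift afterwards.
struct AccountRow {
    let walletId: Data
    let accountType: UInt32
    let accountIndex: UInt32
    let standardTag: UInt8
    let registrationIndex: UInt32
    var lastUpdated: Date
}

// Stage 1: the predicate-style filter (what the FetchDescriptor does).
// Stage 2: refine on the remaining identity fields to pick the exact row.
func matchAccount(
    in rows: [AccountRow],
    walletId: Data, accountType: UInt32, accountIndex: UInt32,
    standardTag: UInt8, registrationIndex: UInt32
) -> AccountRow? {
    let coarse = rows.filter {
        $0.walletId == walletId
            && $0.accountType == accountType
            && $0.accountIndex == accountIndex
    }
    return coarse.first {
        $0.standardTag == standardTag
            && $0.registrationIndex == registrationIndex
    }
}
```

A miss on either stage falls through to the insert branch, which is what makes the upsert collapse load-path and sync-path emits onto one row.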
+ account.standardTag = standardTag + account.registrationIndex = registrationIndex + account.keyClass = keyClass + account.userIdentityId = userIdentityId + account.friendIdentityId = friendIdentityId // Highest-used address pool indices. if acc.has_external_highest_used { @@ -479,22 +513,39 @@ public class PlatformWalletPersistenceHandler { } } - private func markUtxoSpent(_ op: OutPointFFI) { - let outpoint = PersistentTxo.makeOutpoint(txid: hashData(op.txid), vout: op.vout) + private func markUtxoSpent(_ entry: SpentOutPointFFI) { + let outpoint = PersistentTxo.makeOutpoint( + txid: hashData(entry.outpoint.txid), + vout: entry.outpoint.vout + ) let descriptor = FetchDescriptor<PersistentTxo>( predicate: #Predicate { $0.outpoint == outpoint } ) - if let txo = try? backgroundContext.fetch(descriptor).first { - txo.isSpent = true - // The FFI's spent-utxo notification only carries the - // outpoint, not the spending tx — so we cannot populate - // `txo.spendingTransaction` here. `isSpent = true` with - // `spendingTransaction == nil` is the steady-state we - // reach for now; future work: have the FFI emit the - // spending txid alongside each spent outpoint and link - // them up here. - txo.lastUpdated = Date() + guard let txo = try? backgroundContext.fetch(descriptor).first else { + return + } + txo.isSpent = true + // Link the spending transaction. The FFI now carries + // `spending_txid` alongside the outpoint (the txid of the + // `TransactionRecord` whose inputs included this outpoint), + // so we can resolve the parent and set the relationship. + // If the spending tx hasn't landed in SwiftData yet (rare + // — same-flush ordering normally upserts the tx before + // its spent-outpoint emit), leave the relationship nil; the + // next flush carrying that tx triggers another upsert + // round and eventually catches up. 
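The commit message notes that `txidHex` reverses bytes for canonical block-explorer display: txids are stored in little-endian wire order, while explorers render the byte-reversed big-endian hex. A minimal sketch of that convention (the helper name is illustrative, not the SDK's API):

```swift
import Foundation

// Internal txid bytes are little-endian (Bitcoin/Dash wire order);
// explorers display the byte-reversed hex form, so reverse before
// hex-encoding for any user-facing string.
func displayTxidHex(_ raw: Data) -> String {
    raw.reversed().map { String(format: "%02x", $0) }.joined()
}
```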
+ let spendingTxid = hashData(entry.spending_txid) + if !spendingTxid.isEmpty, + !spendingTxid.allSatisfy({ $0 == 0 }), + txo.spendingTransaction?.txid != spendingTxid { + let txDescriptor = FetchDescriptor( + predicate: #Predicate { $0.txid == spendingTxid } + ) + if let spendingTx = try? backgroundContext.fetch(txDescriptor).first { + txo.spendingTransaction = spendingTx + } } + txo.lastUpdated = Date() } private func markUtxoInstantLocked(_ op: OutPointFFI) { @@ -534,14 +585,15 @@ public class PlatformWalletPersistenceHandler { // Root xpub is redundant with `wallet_id` for identity / // verification; Rust-side will stop requiring it once the // upstream rust-dashcore PR lands. - cb.on_persist_account_fn = persistAccountCallback + cb.on_persist_account_registrations_fn = persistAccountRegistrationsCallback cb.on_load_wallet_list_fn = loadWalletListCallback cb.on_load_wallet_list_free_fn = loadWalletListFreeCallback cb.on_persist_wallet_metadata_fn = persistWalletMetadataCallback - cb.on_persist_account_addresses_fn = persistAccountAddressesCallback + cb.on_persist_account_address_pools_fn = persistAccountAddressPoolsCallback cb.on_persist_identities_fn = persistIdentitiesCallback cb.on_persist_identity_keys_fn = persistIdentityKeysCallback cb.on_persist_token_balances_fn = persistTokenBalancesCallback + cb.on_persist_contacts_fn = persistContactsCallback return cb } @@ -684,6 +736,42 @@ public class PlatformWalletPersistenceHandler { } row.lastUpdated = Date() + // Upsert the DPNS-label cache for this identity. + // + // The Rust changeset's merge policy is append-only + // (`IdentityChangeSet::merge` only adds labels not + // already present on the existing entry), so a label + // missing from this flush does NOT mean it was removed + // — we mirror that by inserting new rows but never + // deleting existing ones here. 
DPNS doesn't expose a + // user-driven "delete name" today; if/when it does, the + // removal must arrive via a separate signal so we know + // it's intentional. + // + // `acquiredAt` is informational on the existing row — + // we refresh it on upsert so a later sync that fills in + // the timestamp wins over an earlier `0` placeholder. + upsertDPNSNames( + identityRow: row, + names: entry.dpnsNames + ) + + // Upsert the DashPay profile cache for this identity. + // + // Gated on `entry.dashpayProfile != nil` — a `nil` + // snapshot mirrors the FFI's + // `dashpay_profile_present == false`, which the Rust + // `IdentityChangeSet::merge` policy treats as "no + // update" (NOT delete). DashPay doesn't expose a + // user-driven "delete profile" today; if it ever does, + // the removal must arrive via a separate signal so we + // know it's intentional. Match the dpns-name handling + // shape: a missing snapshot leaves any existing row + // intact. + if let profile = entry.dashpayProfile { + upsertDashpayProfile(identityRow: row, profile: profile) + } + // Attach the identity to its owning `PersistentWallet` // via the relationship. This is the sole wallet-side // association on the row — there is no denormalized @@ -722,6 +810,143 @@ public class PlatformWalletPersistenceHandler { } // onQueue } + /// Upsert a `PersistentDPNSName` row for every label the FFI + /// identity entry carried. Rows are keyed on + /// `(networkRaw, normalizedParentDomainName, normalizedLabel)`, + /// matching `PersistentDPNSName`'s + /// `#Unique<…>([\.networkRaw, \.normalizedParentDomainName, + /// \.normalizedLabel])` declaration — which itself mirrors the + /// DPNS contract's `parentNameAndLabel` unique index. If a label + /// transferred between identities on the same network the + /// existing row's `identity` is rebound to the current owner. 
+ /// + /// The FFI `IdentityEntryFFI.dpns_names` array carries only the + /// display label today; the parent domain defaults to `"dash"` + /// (the only top-level DPNS domain on Dash Platform), and the + /// normalized forms are derived via + /// `PersistentDPNSName.normalize(_:)` on insert. If/when the FFI + /// is extended to carry the parent domain, this site's defaults + /// become the fallback path. + /// + /// Append-only at the per-identity level: existing rows whose + /// label is no longer in the FFI list survive (see the call-site + /// comment on `IdentityChangeSet::merge`'s policy). The function + /// only ever inserts or refreshes; it does NOT cascade-prune. + /// + /// Assumes it's already running on `serialQueue` — only called + /// from inside `persistIdentities`'s `onQueue` body. + private func upsertDPNSNames( + identityRow: PersistentIdentity, + names: [(label: String, acquiredAt: UInt64)] + ) { + if names.isEmpty { + return + } + + let networkRaw = identityRow.networkRaw + // DPNS today exposes only the "dash" top-level domain. If the + // FFI ever forwards a different parent, the model carries it + // through verbatim — for now we stamp the universal default. + let parentDomainName = "dash" + let normalizedParentDomainName = PersistentDPNSName.normalize(parentDomainName) + + for entry in names { + let normalizedLabel = PersistentDPNSName.normalize(entry.label) + let descriptor = FetchDescriptor( + predicate: #Predicate { + $0.networkRaw == networkRaw + && $0.normalizedParentDomainName == normalizedParentDomainName + && $0.normalizedLabel == normalizedLabel + } + ) + if let existing = try? backgroundContext.fetch(descriptor).first { + // Refresh the timestamp if the FFI now carries a + // non-zero value. Don't clobber a real timestamp + // with a `0` placeholder — `acquired_at` is sticky + // once set. 
+ if entry.acquiredAt != 0 && existing.acquiredAt != entry.acquiredAt { + existing.acquiredAt = entry.acquiredAt + existing.lastUpdated = Date() + } + // Refresh the display label too — a later flush may + // carry a corrected casing for the same normalized + // form (e.g. originally synced as "alice" then + // re-synced as "Alice"). The normalized index column + // doesn't change, so the unique constraint holds. + if existing.label != entry.label { + existing.label = entry.label + existing.lastUpdated = Date() + } + // Rebind to the current owner if the label transferred + // between identities on this network. DPNS supports + // transfers, and the unique constraint is per-network, + // so the row stays but the owner pointer moves. + if existing.identity !== identityRow { + existing.identity = identityRow + existing.lastUpdated = Date() + } + } else { + let row = PersistentDPNSName( + identity: identityRow, + label: entry.label, + parentDomainName: parentDomainName, + acquiredAt: entry.acquiredAt + ) + backgroundContext.insert(row) + } + } + } + + /// Upsert the at-most-one `PersistentDashpayProfile` row for an + /// identity. Idempotent on repeated flushes: an existing row is + /// refreshed in place rather than replaced, so SwiftUI views + /// observing it via `@Query` see field-level updates rather than + /// row-replacement churn. + /// + /// The DashPay contract guarantees one `profile` document per + /// `ownerId`, so we never have to disambiguate multiple rows for + /// the same identity — `identityRow.dashpayProfile` is either + /// already present (refresh) or absent (insert). + /// + /// Runs on `serialQueue` — only called from inside + /// `persistIdentities`'s `onQueue` body. + private func upsertDashpayProfile( + identityRow: PersistentIdentity, + profile: DashpayProfileSnapshot + ) { + if let existing = identityRow.dashpayProfile { + // Field-level refresh. 
Every column is overwritten on + // every flush — the FFI snapshot is authoritative for + // the profile document's contents (the underlying + // `IdentityEntry::dashpay_profile` is a whole-document + // `Some(_)` payload, not a partial diff). Fields the + // sender omitted come through as `nil` here too, so + // setting them to nil mirrors the on-Platform state. + existing.displayName = profile.displayName + existing.bio = profile.bio + existing.publicMessage = profile.publicMessage + existing.avatarUrl = profile.avatarUrl + existing.avatarHash = profile.avatarHash + existing.avatarFingerprint = profile.avatarFingerprint + existing.lastUpdated = Date() + } else { + let row = PersistentDashpayProfile( + identity: identityRow, + displayName: profile.displayName, + publicMessage: profile.publicMessage, + bio: profile.bio, + avatarUrl: profile.avatarUrl, + avatarHash: profile.avatarHash, + avatarFingerprint: profile.avatarFingerprint + ) + backgroundContext.insert(row) + // SwiftData populates the inverse `dashpayProfile` + // pointer from the `inverse:` declaration on + // `PersistentIdentity.dashpayProfile`, so we don't need + // to assign `identityRow.dashpayProfile = row` here. + } + } + // MARK: - Identity keys persistence /// Upsert / remove rows from `PersistentPublicKey` in response to @@ -975,6 +1200,178 @@ public class PlatformWalletPersistenceHandler { let tokenId: Data } + // MARK: - DashPay contact-request persistence + + /// Apply a DashPay `ContactChangeSet` projection to SwiftData. + /// + /// Mapping: + /// - Each `upsert.ContactRequestFFI` becomes one row keyed by + /// `(networkRaw, ownerIdentityId, contactIdentityId, isOutgoing)` + /// on `PersistentDashpayContactRequest`. The Rust side projects + /// `ContactChangeSet::sent_requests` / `incoming_requests` / + /// `established` into this flat array (with `is_outgoing` + /// stamped per row), so the upsert path is direction-agnostic. + /// - Each `removedSent` row drops the matching outgoing row. 
+ /// - Each `removedIncoming` row drops the matching incoming row. + /// + /// The owner identity is required to exist in SwiftData before + /// the row is inserted — the relationship is non-optional and + /// `networkRaw` is read off it. If a flush carries a contact + /// upsert for an owner identity Swift hasn't seen yet (race with + /// a first-time identity flush), the row is skipped; the next + /// flush will replay it after the identity row lands. In + /// practice the changeset is one round, so this only matters + /// for the very first identity registration where the contact + /// changeset and identity changeset arrive in the same store() + /// call — within a round, identities apply before contacts (see + /// the ordering in `FFIPersister::store`), so the lookup here + /// will normally succeed. + func persistContacts( + walletId: Data, + upserts: [ContactRequestSnapshot], + removedSent: [ContactRequestRemovalSnapshot], + removedIncoming: [ContactRequestRemovalSnapshot] + ) { + onQueue { + for entry in upserts { + let ownerId = entry.ownerIdentityId + let ownerDescriptor = FetchDescriptor<PersistentIdentity>( + predicate: #Predicate { $0.identityId == ownerId } + ) + guard let owner = try? backgroundContext.fetch(ownerDescriptor).first else { + // Owner identity hasn't landed yet. Within a + // single round identities apply before contacts, + // so we'd only hit this if the FFI changeset + // surfaces a contact for an identity that isn't + // managed by any wallet locally — there's no + // identity row to hang it off, and the contract's + // `ownerId` invariant means the row would be + // orphaned anyway. Skip silently; the next sync + // round will replay it once the owner row exists. 
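The contact persistence above keys every row on `(owner, contact, isOutgoing)` and treats upserts as direction-agnostic, with replayed tombstones that succeed on absent rows. The shape of that keying can be sketched with an in-memory store (illustrative names; the real rows live in SwiftData):

```swift
import Foundation

// Illustrative key mirroring PersistentDashpayContactRequest's
// uniqueness columns (network omitted: it is implied by the owner).
struct ContactKey: Hashable {
    let ownerId: Data
    let contactId: Data
    let isOutgoing: Bool
}

struct ContactStore {
    // Value stands in for the row's payload (here: accountReference).
    private(set) var rows: [ContactKey: UInt32] = [:]

    // Upsert is direction-agnostic: pending and established rows share
    // the same key, so a promotion refreshes the row in place.
    mutating func upsert(_ key: ContactKey, accountReference: UInt32) {
        rows[key] = accountReference
    }

    // Tombstones are replayed; removing an absent row is the success
    // state, so no error on miss.
    mutating func remove(_ key: ContactKey) {
        rows.removeValue(forKey: key)
    }
}
```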
+ continue + } + + let networkRaw = owner.networkRaw + let contactId = entry.contactIdentityId + let isOutgoing = entry.isOutgoing + let descriptor = FetchDescriptor( + predicate: #Predicate { + $0.networkRaw == networkRaw + && $0.ownerIdentityId == ownerId + && $0.contactIdentityId == contactId + && $0.isOutgoing == isOutgoing + } + ) + if let existing = try? backgroundContext.fetch(descriptor).first { + // Refresh in place — every column is overwritten + // because the FFI snapshot is authoritative for + // the underlying `ContactRequest` document. This + // is also the path `established` rows take to + // promote a previously-pending row in place over + // its prior `sent_requests` / `incoming_requests` + // entry; the unique key is identical because the + // promotion doesn't change `(owner, contact, + // direction)`. + existing.senderKeyIndex = entry.senderKeyIndex + existing.recipientKeyIndex = entry.recipientKeyIndex + existing.accountReference = entry.accountReference + existing.encryptedPublicKey = entry.encryptedPublicKey + existing.encryptedAccountLabel = entry.encryptedAccountLabel + existing.autoAcceptProof = entry.autoAcceptProof + existing.coreHeightCreatedAt = entry.coreHeightCreatedAt + existing.createdAtMillis = entry.createdAtMillis + if existing.owner !== owner { + existing.owner = owner + } + existing.lastUpdated = Date() + } else { + let row = PersistentDashpayContactRequest( + owner: owner, + contactIdentityId: entry.contactIdentityId, + isOutgoing: entry.isOutgoing, + senderKeyIndex: entry.senderKeyIndex, + recipientKeyIndex: entry.recipientKeyIndex, + accountReference: entry.accountReference, + encryptedPublicKey: entry.encryptedPublicKey, + encryptedAccountLabel: entry.encryptedAccountLabel, + autoAcceptProof: entry.autoAcceptProof, + coreHeightCreatedAt: entry.coreHeightCreatedAt, + createdAtMillis: entry.createdAtMillis + ) + backgroundContext.insert(row) + } + } + + for tomb in removedSent { + deleteContactRow( + ownerId: 
tomb.ownerIdentityId, + contactId: tomb.contactIdentityId, + isOutgoing: true + ) + } + for tomb in removedIncoming { + deleteContactRow( + ownerId: tomb.ownerIdentityId, + contactId: tomb.contactIdentityId, + isOutgoing: false + ) + } + // No save() — bracketed by changesetBegin/End from the + // Rust store() round. + _ = walletId // reserved for future wallet-scope batching + } + } + + /// Delete the single `PersistentDashpayContactRequest` row matching + /// `(ownerIdentityId, contactIdentityId, isOutgoing)`. The fourth + /// uniqueness column (`networkRaw`) is implied by the owner — an + /// identity belongs to exactly one network — so we don't have to + /// fan out the predicate across networks. Silent on miss (no + /// existing row): the FFI changeset replays tombstones, and an + /// already-removed row is the success state. + /// + /// Assumes it's already running on `serialQueue`. + private func deleteContactRow(ownerId: Data, contactId: Data, isOutgoing: Bool) { + let direction = isOutgoing + let descriptor = FetchDescriptor<PersistentDashpayContactRequest>( + predicate: #Predicate { + $0.ownerIdentityId == ownerId + && $0.contactIdentityId == contactId + && $0.isOutgoing == direction + } + ) + if let existing = try? backgroundContext.fetch(descriptor).first { + backgroundContext.delete(existing) + } + } + + /// Owned snapshot of a `ContactRequestFFI` row. Decouples the + /// lifetime of the encrypted-key buffers from the Rust-side + /// allocation: the callback copies them into Swift `Data` before + /// returning, so `free_contact_requests_ffi` runs cleanly. + struct ContactRequestSnapshot { + let ownerIdentityId: Data + let contactIdentityId: Data + let isOutgoing: Bool + let senderKeyIndex: UInt32 + let recipientKeyIndex: UInt32 + let accountReference: UInt32 + let encryptedPublicKey: Data + let encryptedAccountLabel: Data? + let autoAcceptProof: Data? + let coreHeightCreatedAt: UInt32 + let createdAtMillis: UInt64 + } + + /// Owned snapshot of a `ContactRequestRemovalFFI` row. 
Carries + /// just the `(owner, contact)` pair — the direction is implied + /// by which array (`removed_sent` vs `removed_incoming`) the + /// removal came from on the FFI side. + struct ContactRequestRemovalSnapshot { + let ownerIdentityId: Data + let contactIdentityId: Data + } + // MARK: - Identity private-key derivation /// Derive the 32-byte ECDSA scalar for an identity key from the @@ -1123,6 +1520,41 @@ public class PlatformWalletPersistenceHandler { let label: String? let status: UInt8 let walletId: Data? + /// Confirmed DPNS labels owned by this identity, paired with + /// their `acquired_at` Unix-millis timestamp (`0` when the + /// source `Option` was `None`). Mirrors the parallel + /// `dpns_names` / `dpns_names_acquired_at` arrays on + /// `IdentityEntryFFI`. Empty when the identity has no settled + /// labels. + let dpnsNames: [(label: String, acquiredAt: UInt64)] + /// DashPay profile snapshot — populated iff + /// `IdentityEntryFFI.dashpay_profile_present == true`. `nil` + /// means "no update for this flush", which mirrors the + /// changeset's `dashpay_profile: None` semantics on the Rust + /// side (NOT a delete signal). Inner fields are individually + /// optional because every DashPay profile field but the + /// implicit `$ownerId` is optional in the contract schema. + let dashpayProfile: DashpayProfileSnapshot? + } + + /// Owned snapshot of the `dashpay_profile_*` fields on + /// `IdentityEntryFFI`. Decouples the lifetime of every contained + /// `String` / `Data` from the FFI heap so the callback can + /// return immediately and the Rust side can run its free-loop. + struct DashpayProfileSnapshot { + let displayName: String? + let bio: String? + let publicMessage: String? + let avatarUrl: String? + /// 32-byte SHA-256 of the avatar binary (DIP-15 `avatarHash`). + /// `nil` when the source `avatar_hash_present == false` — + /// disambiguates "no hash" from "all-zero hash" since the + /// underlying byte array is zero-initialized either way. 
+ let avatarHash: Data? + /// 8-byte DHash perceptual fingerprint (DIP-15 + /// `avatarFingerprint`). `nil` when the source + /// `avatar_fingerprint_present == false`. + let avatarFingerprint: Data? } /// Swift-side snapshot of `IdentityKeyEntryFFI` — public-key @@ -1211,6 +1643,25 @@ public class PlatformWalletPersistenceHandler { row.balance = entry.balance row.account = account row.lastUpdated = Date() + + // Backfill the `coreAddress` link on any TXOs that were + // persisted before this address row existed. The SPV + // pass can emit UTXOs for an address whose pool row + // hasn't landed yet; in that case `upsertUtxo` skipped + // the relationship and `record.coreAddress` stayed nil. + // Without this sweep the storage-explorer's "Address + // Row" field renders as "—" forever even though the + // address row now exists. Avoid the SwiftData + // optional-relationship-in-predicate gotcha by + // filtering nil-coreAddress in Swift after the fetch. + let txoBackfillDescriptor = FetchDescriptor( + predicate: #Predicate { $0.address == address } + ) + if let txosAtAddress = try? backgroundContext.fetch(txoBackfillDescriptor) { + for txo in txosAtAddress where txo.coreAddress == nil { + txo.coreAddress = row + } + } } try? 
backgroundContext.save() @@ -1511,7 +1962,7 @@ public class PlatformWalletPersistenceHandler { return (nil, 0) } let restorable = wallets.filter { wallet in - wallet.accounts.contains { !$0.accountExtendedPubKeyBytes.isEmpty } + wallet.accounts.contains { ($0.accountExtendedPubKeyBytes?.isEmpty == false) } } if restorable.isEmpty { return (nil, 0) @@ -1526,7 +1977,7 @@ public class PlatformWalletPersistenceHandler { for (i, w) in restorable.enumerated() { let sortedAccounts = w.accounts - .filter { !$0.accountExtendedPubKeyBytes.isEmpty } + .filter { ($0.accountExtendedPubKeyBytes?.isEmpty == false) } .sorted { ($0.accountType, $0.accountIndex, $0.registrationIndex, $0.keyClass) < ($1.accountType, $1.accountIndex, $1.registrationIndex, $1.keyClass) @@ -1537,7 +1988,8 @@ public class PlatformWalletPersistenceHandler { } else { let buf = UnsafeMutablePointer.allocate(capacity: sortedAccounts.count) for (j, acc) in sortedAccounts.enumerated() { - let xpub = acc.accountExtendedPubKeyBytes + // Filter above guarantees non-nil + non-empty. + let xpub = acc.accountExtendedPubKeyBytes ?? Data() let xpubBuffer = UnsafeMutablePointer.allocate(capacity: xpub.count) xpub.copyBytes(to: xpubBuffer, count: xpub.count) allocation.scalarBuffers.append((xpubBuffer, xpub.count)) @@ -1782,7 +2234,9 @@ public class PlatformWalletPersistenceHandler { return [] } return wallets - .filter { w in w.accounts.contains { !$0.accountExtendedPubKeyBytes.isEmpty } } + .filter { w in + w.accounts.contains { ($0.accountExtendedPubKeyBytes?.isEmpty == false) } + } .map { $0.walletId } } } @@ -2078,21 +2532,32 @@ private func persistSyncStateCallback( return 0 } -private func persistAccountCallback( +/// C shim for `on_persist_account_registrations_fn`. Walks the +/// Rust-owned `[AccountSpecFFI]` slice and writes one +/// `PersistentAccount` row per entry. 
Replaces the legacy +/// per-entry `on_persist_account_fn` — same shape per row, but +/// the round arrives as a single batched callback so the whole +/// registration round flushes through one `store(...)` cycle on +/// the Rust side. +private func persistAccountRegistrationsCallback( context: UnsafeMutableRawPointer?, walletIdPtr: UnsafePointer?, - specPtr: UnsafePointer? + specsPtr: UnsafePointer?, + count: UInt ) -> Int32 { guard let context = context, - let walletIdPtr = walletIdPtr, - let specPtr = specPtr else { + let walletIdPtr = walletIdPtr else { return 0 } let handler = Unmanaged .fromOpaque(context) .takeUnretainedValue() let walletId = Data(bytes: walletIdPtr, count: 32) - handler.persistAccount(walletId: walletId, spec: specPtr.pointee) + if count > 0, let specsPtr = specsPtr { + for i in 0..?, - specPtr: UnsafePointer?, - addressesPtr: UnsafePointer?, + poolsPtr: UnsafePointer?, count: UInt ) -> Int32 { guard let context = context, - let walletIdPtr = walletIdPtr, - let specPtr = specPtr else { + let walletIdPtr = walletIdPtr else { return 0 } let handler = Unmanaged .fromOpaque(context) .takeUnretainedValue() let walletId = Data(bytes: walletIdPtr, count: 32) + guard count > 0, let poolsPtr = poolsPtr else { + return 0 + } - let spec = specPtr.pointee - var userIdentityId = Data(count: 32) - withUnsafeBytes(of: spec.user_identity_id) { src in - userIdentityId.withUnsafeMutableBytes { dst in dst.copyMemory(from: src) } - } - var friendIdentityId = Data(count: 32) - withUnsafeBytes(of: spec.friend_identity_id) { src in - friendIdentityId.withUnsafeMutableBytes { dst in dst.copyMemory(from: src) } - } - let key = PlatformWalletPersistenceHandler.AccountLookupKey( - typeTag: UInt32(spec.type_tag), - index: spec.index, - standardTag: spec.standard_tag, - registrationIndex: spec.registration_index, - keyClass: spec.key_class, - userIdentityId: userIdentityId, - friendIdentityId: friendIdentityId - ) + for i in 0.. 
0, let addressesPtr = addressesPtr { - for i in 0.. 0, let addressesPtr = pool.addresses_ptr { + for j in 0.. 0, + let labelsPtr = e.dpns_names, + let acquiredPtr = e.dpns_names_acquired_at { + dpnsNames.reserveCapacity(dpnsCount) + for j in 0..?, + upsertsPtr: UnsafePointer?, + upsertsCount: UInt, + removedSentPtr: UnsafePointer?, + removedSentCount: UInt, + removedIncomingPtr: UnsafePointer?, + removedIncomingCount: UInt +) -> Int32 { + guard let context = context, + let walletIdPtr = walletIdPtr else { + return 0 + } + let handler = Unmanaged + .fromOpaque(context) + .takeUnretainedValue() + let walletId = Data(bytes: walletIdPtr, count: 32) + + var upserts: [PlatformWalletPersistenceHandler.ContactRequestSnapshot] = [] + if upsertsCount > 0, let upsertsPtr = upsertsPtr { + upserts.reserveCapacity(Int(upsertsCount)) + for i in 0.. 0 { + encryptedPublicKey = Data(bytes: pkPtr, count: Int(e.encrypted_public_key_len)) + } else { + encryptedPublicKey = Data() + } + let encryptedAccountLabel: Data? + if let labelPtr = e.encrypted_account_label, e.encrypted_account_label_len > 0 { + encryptedAccountLabel = Data( + bytes: labelPtr, + count: Int(e.encrypted_account_label_len) + ) + } else { + encryptedAccountLabel = nil + } + let autoAcceptProof: Data? 
+ if let proofPtr = e.auto_accept_proof, e.auto_accept_proof_len > 0 { + autoAcceptProof = Data(bytes: proofPtr, count: Int(e.auto_accept_proof_len)) + } else { + autoAcceptProof = nil + } + + upserts.append(.init( + ownerIdentityId: dataFromTuple32(e.owner_id), + contactIdentityId: dataFromTuple32(e.contact_id), + isOutgoing: e.is_outgoing, + senderKeyIndex: e.sender_key_index, + recipientKeyIndex: e.recipient_key_index, + accountReference: e.account_reference, + encryptedPublicKey: encryptedPublicKey, + encryptedAccountLabel: encryptedAccountLabel, + autoAcceptProof: autoAcceptProof, + coreHeightCreatedAt: e.core_height_created_at, + createdAtMillis: e.created_at + )) + } + } + + var removedSent: [PlatformWalletPersistenceHandler.ContactRequestRemovalSnapshot] = [] + if removedSentCount > 0, let removedSentPtr = removedSentPtr { + removedSent.reserveCapacity(Int(removedSentCount)) + for i in 0.. 0, let removedIncomingPtr = removedIncomingPtr { + removedIncoming.reserveCapacity(Int(removedIncomingCount)) + for i in 0.. PlatformWalletFFIResult in - platform_wallet_token_watch_and_sync( - handle, - bp.baseAddress, - UInt(bp.count), - &error - ) - } - guard result == PLATFORM_WALLET_FFI_RESULT_SUCCESS else { - throw PlatformWalletError(result: result, error: error) - } - }.value - } - - /// Build a 32-tuple of zeros. Field-by-field literals are the - /// only construction the imported C tuple type accepts. - fileprivate static func zeroByteTuple32() -> FFIByteTuple32 { - return ( - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0 - ) - } - - /// Copy a 32-byte `Identifier` payload into a `FFIByteTuple32` - /// without allocating an intermediate `Data` / array. 
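The callbacks above all follow one rule: copy every borrowed `(pointer, length)` pair into owned Swift `Data` before returning, so the Rust side can run its free-loop immediately. A self-contained sketch of that pattern, with an illustrative `RawRow` standing in for the real FFI structs:

```swift
import Foundation

// Borrowed FFI-style row: a pointer plus length into memory the
// callee does not own (names illustrative, not the SDK's C types).
struct RawRow {
    var ptr: UnsafePointer<UInt8>?
    var len: UInt
}

// Copy each payload into an owned Data (or nil for absent/empty
// buffers). Nothing retains the borrowed pointers past the call.
func ownedPayloads(_ rows: UnsafeBufferPointer<RawRow>) -> [Data?] {
    rows.map { row in
        guard let p = row.ptr, row.len > 0 else { return nil }
        return Data(bytes: p, count: Int(row.len)) // copies the bytes
    }
}
```

`Data(bytes:count:)` always copies, which is what makes it safe to return before the Rust allocation is freed.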
- fileprivate static func copyIdentifier( - _ id: Identifier, - into dst: inout FFIByteTuple32 - ) { - let bytes = id.toFFIByteArray() - withUnsafeMutableBytes(of: &dst) { raw in - let typed = raw.bindMemory(to: UInt8.self) - for i in 0..` subscription across + /// `BalanceCardView` and the per-account rows. Without this + /// consolidation each consumer had its own subscription and the + /// persister stalled visibly during sync (3× SwiftData + /// change-tracking work per TXO insert). + let walletTxos: [PersistentTxo] @Query private var accounts: [PersistentAccount] - init(wallet: PersistentWallet) { + /// `address → [TXO]` index built once per render. Each account + /// row asks for its slice via `txos(for:)`; that's a constant- + /// time set-of-lookups against the index instead of a fresh + /// O(walletTxos) filter per account. Address-pool size is + /// gap-limit-bounded (~30-60 rows), so this stays cheap even + /// for wallets with thousands of TXOs. + private var txosByAddress: [String: [PersistentTxo]] { + Dictionary(grouping: walletTxos, by: \.address) + } + + init(wallet: PersistentWallet, walletTxos: [PersistentTxo]) { self.wallet = wallet + self.walletTxos = walletTxos let walletId = wallet.walletId _accounts = Query( filter: #Predicate { acc in @@ -18,6 +36,22 @@ struct AccountListView: View { ) } + /// TXOs belonging to a specific account, looked up from the + /// pre-built `txosByAddress` index by walking the account's + /// address pool. 
+ private func txos( + for account: PersistentAccount, + index: [String: [PersistentTxo]] + ) -> [PersistentTxo] { + var collected: [PersistentTxo] = [] + for address in account.coreAddresses { + if let bucket = index[address.address] { + collected.append(contentsOf: bucket) + } + } + return collected + } + /// Stable display order — grouped by logical priority rather /// than by raw `accountType` tag so BIP44 leads, PlatformPayment /// sits next, BIP32 follows, CoinJoin after, and every special- @@ -61,9 +95,13 @@ struct AccountListView: View { description: Text("Accounts are created automatically when the wallet syncs.") ) } else { + let index = txosByAddress List(orderedAccounts) { account in NavigationLink(destination: AccountDetailView(wallet: wallet, account: account)) { - AccountRowView(account: account) + AccountRowView( + account: account, + accountTxos: txos(for: account, index: index) + ) } } .listStyle(.plain) @@ -75,6 +113,11 @@ struct AccountListView: View { // MARK: - Account Row View struct AccountRowView: View { let account: PersistentAccount + /// TXOs that the parent has identified as belonging to this + /// account. Pre-filtered upstream so the row doesn't have to + /// re-walk the wallet's TXO set per render. Empty for non- + /// Core-balance accounts (PlatformPayment / identity / etc.). + let accountTxos: [PersistentTxo] /// Friendly label for the account. Indexed account types get a /// trailing "#"; other types keep the bare name emitted by @@ -107,6 +150,25 @@ struct AccountRowView: View { account.accountType == 14 } + /// Per-account balance: partition the parent-supplied + /// `accountTxos` by `isSpent` × `isConfirmed`. + /// `PersistentAccount.balanceConfirmed` / `balanceUnconfirmed` + /// are persisted scalars but nothing currently writes them, so + /// we derive on read from the TXO set (the source of truth). + /// The walk happens upstream in `AccountListView.txos(for:)` — + /// this just filters the pre-narrowed slice. 
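The index-then-slice approach described here (one `Dictionary(grouping:)` pass over the wallet's TXOs, then per-account balance derivation by partitioning on `isSpent` and `isConfirmed`) can be sketched with plain value types; `TxoStub` is an illustrative stand-in for `PersistentTxo`:

```swift
import Foundation

struct TxoStub {
    let address: String
    let amount: UInt64
    let isSpent: Bool
    let isConfirmed: Bool
}

// Build the address → TXO index once per render; per-account queries
// become constant-time lookups instead of O(walletTxos) filters.
func indexByAddress(_ txos: [TxoStub]) -> [String: [TxoStub]] {
    Dictionary(grouping: txos, by: \.address)
}

// Derive the confirmed balance from the pre-narrowed slice: unspent
// and confirmed TXOs only, summed.
func confirmedBalance(_ txos: [TxoStub]) -> UInt64 {
    txos.filter { !$0.isSpent && $0.isConfirmed }
        .reduce(0) { $0 + $1.amount }
}
```

The unconfirmed variant is the same reduction with the `isConfirmed` test negated.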
+ private var coreConfirmedBalance: UInt64 { + accountTxos.lazy + .filter { !$0.isSpent && $0.isConfirmed } + .reduce(0) { $0 + $1.amount } + } + + private var coreUnconfirmedBalance: UInt64 { + accountTxos.lazy + .filter { !$0.isSpent && !$0.isConfirmed } + .reduce(0) { $0 + $1.amount } + } + private var iconName: String { switch account.accountType { case 0: @@ -164,17 +226,17 @@ struct AccountRowView: View { Text("Confirmed") .font(.caption) .foregroundColor(.secondary) - Text(formatBalance(account.balanceConfirmed)) + Text(formatBalance(coreConfirmedBalance)) .font(.subheadline) .fontWeight(.medium) } - if account.balanceUnconfirmed > 0 { + if coreUnconfirmedBalance > 0 { VStack(alignment: .leading, spacing: 2) { Text("Pending") .font(.caption) .foregroundColor(.secondary) - Text(formatBalance(account.balanceUnconfirmed)) + Text(formatBalance(coreUnconfirmedBalance)) .font(.subheadline) .fontWeight(.medium) .foregroundColor(.orange) @@ -187,7 +249,7 @@ struct AccountRowView: View { Text("Total") .font(.caption) .foregroundColor(.secondary) - Text(formatBalance(account.balanceConfirmed + account.balanceUnconfirmed)) + Text(formatBalance(coreConfirmedBalance + coreUnconfirmedBalance)) .font(.subheadline) .fontWeight(.semibold) .foregroundColor(iconColor) diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/IdentitiesContentView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/IdentitiesContentView.swift index e39674a1bbb..36164a25941 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/IdentitiesContentView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/IdentitiesContentView.swift @@ -134,7 +134,7 @@ struct IdentitiesContentView: View { Button { showingSearchWallets = true } label: { - Label("Search Wallets for Identities", systemImage: "magnifyingglass") + Label("Re-scan for Identities", systemImage: "magnifyingglass") } } label: { Image(systemName: "plus") diff --git 
a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/ReceiveAddressView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/ReceiveAddressView.swift index 0415c906cf9..1052065ec98 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/ReceiveAddressView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/ReceiveAddressView.swift @@ -34,7 +34,7 @@ struct ReceiveAddressView: View { /// Lowest-indexed unused external address on the primary BIP44 /// account. `PersistentCoreAddress` rows are populated by the Rust - /// `on_persist_account_addresses_fn` callback at wallet creation + /// `on_persist_account_address_pools_fn` callback at wallet creation /// (initial gap-limit fill), so they're available without a /// runtime FFI hop. private var nextCoreReceiveAddress: PersistentCoreAddress? { diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/WalletDetailView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/WalletDetailView.swift index 0ee9faa6774..cc091a83f45 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/WalletDetailView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/WalletDetailView.swift @@ -73,7 +73,7 @@ struct WalletDetailView: View { .padding(.top, 8) // Balance Card - BalanceCardView(wallet: wallet) + BalanceCardView(wallet: wallet, walletTxos: walletTxos) .padding() // Action Buttons @@ -151,7 +151,7 @@ struct WalletDetailView: View { .padding(.top) // Account List - AccountListView(wallet: wallet) + AccountListView(wallet: wallet, walletTxos: walletTxos) } .navigationTitle(wallet.label) .navigationBarTitleDisplayMode(.inline) @@ -598,8 +598,18 @@ struct BalanceCardView: View { /// distinguish "synced with zero balance" from "never synced". 
@Query private var syncStates: [PersistentPlatformAddressesSyncState] - init(wallet: PersistentWallet) { + /// Per-wallet TXO rows passed down from `WalletDetailView` so we + /// share a single `@Query` subscription across + /// every child view that needs the balance. Originally each of + /// `BalanceCardView` / `AccountListView` / `WalletDetailView` + /// had its own subscription; during sync the SwiftData + /// change-tracking ran 3× per TXO insert and the persister + /// stalled visibly. One subscription, one walk. + let walletTxos: [PersistentTxo] + + init(wallet: PersistentWallet, walletTxos: [PersistentTxo]) { self.wallet = wallet + self.walletTxos = walletTxos let walletId = wallet.walletId // `PersistentPlatformAddressesSyncState.network` is a required AppNetwork; // `.testnet` is a harmless sentinel for wallets that haven't @@ -616,12 +626,20 @@ struct BalanceCardView: View { ) } + /// Sum of unspent + confirmed TXO amounts. Walks the wallet-TXO + /// query result; one pass, one pred + add per row. private var confirmedBalance: UInt64 { - wallet.balanceConfirmed + walletTxos.lazy + .filter { !$0.isSpent && $0.isConfirmed } + .reduce(0) { $0 + $1.amount } } + /// Sum of unspent + unconfirmed (mempool / IS-locked-but-not-in-block) + /// TXO amounts. private var unconfirmedBalance: UInt64 { - wallet.balanceUnconfirmed + walletTxos.lazy + .filter { !$0.isSpent && !$0.isConfirmed } + .reduce(0) { $0 + $1.amount } } /// Platform balance from BLAST sync (preferred) or identity sum (fallback). 
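The `BalanceCardView` hunk above derives both balances by partitioning unspent TXOs on `isConfirmed` and summing `amount`; spent TXOs count toward neither bucket. A minimal stand-alone sketch of that rule, where the plain `Txo` struct is an assumed stand-in for the SwiftData `PersistentTxo` model (not the real class):

```swift
// Plain stand-in for `PersistentTxo`: just the three fields the
// balance derivation reads (assumption for illustration only).
struct Txo {
    let amount: UInt64
    let isSpent: Bool
    let isConfirmed: Bool
}

// Derive both balances the way the view does: partition the unspent
// TXO set by `isConfirmed` and sum the amounts in one lazy pass each.
func balances(_ txos: [Txo]) -> (confirmed: UInt64, unconfirmed: UInt64) {
    let confirmed = txos.lazy
        .filter { !$0.isSpent && $0.isConfirmed }
        .reduce(0) { $0 + $1.amount }
    let unconfirmed = txos.lazy
        .filter { !$0.isSpent && !$0.isConfirmed }
        .reduce(0) { $0 + $1.amount }
    return (confirmed, unconfirmed)
}

let sample: [Txo] = [
    Txo(amount: 5_000, isSpent: false, isConfirmed: true),
    Txo(amount: 1_200, isSpent: false, isConfirmed: false),
    Txo(amount: 9_999, isSpent: true,  isConfirmed: true),  // spent: ignored
]
let (confirmed, unconfirmed) = balances(sample)
// confirmed == 5_000, unconfirmed == 1_200
```

The `.lazy` chains mirror the view code: two filtered passes over the slice with no intermediate arrays allocated.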
diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/IdentityDetailView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/IdentityDetailView.swift index 5c633847bb4..1c9b614e3ed 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/IdentityDetailView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/IdentityDetailView.swift @@ -14,12 +14,25 @@ struct IdentityDetailView: View { /// alias edit), SwiftUI re-renders this view automatically. @Query private var identities: [PersistentIdentity] + /// Reactively observe the confirmed DPNS labels owned by this + /// identity. Filters by the denormalized `identityId` column on + /// `PersistentDPNSName` (not the optional relationship traversal + /// `identity?.identityId`, which SwiftData's predicate engine + /// chokes on for nullable relationships). Newest acquisition first + /// — the `acquiredAt` Unix-millis timestamp is `0` when unknown, + /// so legacy / un-timestamped rows naturally sort to the bottom. + @Query private var dpnsNamesRows: [PersistentDPNSName] + init(identityId: Data) { self.identityId = identityId let target = identityId _identities = Query( filter: #Predicate { $0.identityId == target } ) + _dpnsNamesRows = Query( + filter: PersistentDPNSName.predicate(identityId: target), + sort: [SortDescriptor(\PersistentDPNSName.acquiredAt, order: .reverse)] + ) } private var identity: PersistentIdentity? { @@ -40,9 +53,13 @@ struct IdentityDetailView: View { @State private var showingProfileEditor = false @State private var profileError: String? - /// DPNS names owned by this identity, fetched from the owning - /// wallet's `ManagedIdentity`. Empty until `loadDPNSNames` runs. - @State private var dpnsNames: [String] = [] + /// Bare-label projection of `dpnsNamesRows`. 
The list views in + /// this file deal in `[String]`, so this keeps the existing + /// rendering code shape after we switched the source of truth + /// from a plain `@State` array to a SwiftData `@Query`. + private var dpnsNames: [String] { + dpnsNamesRows.map(\.label) + } /// Labels this identity is currently contending for. @State private var contestedDpnsNames: [String] = [] /// Contest metadata keyed by name, surfaced to @@ -371,17 +388,16 @@ struct IdentityDetailView: View { EditAliasView(identity: identity, newAlias: $newAlias) } .sheet(isPresented: $showingRegisterName) { - RegisterNameView(identity: identity, onRegistered: { name in - // Append the just-registered name to the local @State - // list immediately so the section re-renders without - // waiting for the parent's `onAppear` re-fetch. - // De-dupe in case a stale `loadDPNSNames()` already - // landed it. - if !dpnsNames.contains(name) { - dpnsNames.append(name) - } - }) - .environmentObject(appState) + // The DPNS name list is now driven by `@Query` over + // `PersistentDPNSName`. The Rust-side + // `register_name_with_external_signer` path queues an + // `IdentityChangeSet` whose persister-callback hop + // upserts the new label row, which `dpnsNamesRows` + // observes — no manual @State poke needed. We still pass + // an `onRegistered` closure so `RegisterNameView` can + // honor its callback contract, but the body is a no-op. 
+ RegisterNameView(identity: identity, onRegistered: { _ in }) + .environmentObject(appState) } .sheet(isPresented: $showingSelectMainName) { SelectMainNameView(identity: identity) @@ -584,18 +600,23 @@ struct IdentityDetailView: View { guard appState.sdk != nil else { return } - // Fetch regular and contested names sequentially to avoid sending non-Sendable results across tasks - let regular = await fetchRegularDPNSNames(identity: identity) + // Regular DPNS labels: kick a Rust-side + // `IdentityWallet::sync_dpns_names` so the persister callback + // receives a fresh `IdentityChangeSet` and upserts our + // `PersistentDPNSName` rows. The view's `@Query` over + // `dpnsNamesRows` picks the new rows up reactively — no + // assignment needed here. The returned tuple's labels are + // ignored on purpose; SwiftData is the source of truth. + _ = await fetchRegularDPNSNames(identity: identity) + + // Contested labels still flow through plain `@State` — + // they aren't part of the `PersistentDPNSName` collection + // (different lifecycle: in-flight contest churn vs. settled + // labels). The contested cache stays a per-view cache for + // now. let contested = await fetchContestedDPNSNames(identity: identity) await MainActor.run { - // Drive the local @State fields directly — they are the - // source of truth for this view's DPNS lists. The - // previous `appState.updateIdentityDPNSNames(...)` call - // wrote to the IdentityModel cache (which no longer - // exists post-migration) and was not bound back to this - // view's state, so nothing actually rendered from it. - self.dpnsNames = regular.0 self.contestedDpnsNames = contested.0 self.contestedDpnsInfo = contested.1 @@ -956,30 +977,31 @@ struct IdentityDetailView: View { } // Persist balances into `PersistentTokenBalance` via the - // platform-wallet token-watch + sync pipeline. 
We watch - // every (identity, token) pair this view cares about, - // sync, and let the Rust persister fire the + // manager-level identity-sync pipeline. We register the + // identity with this view's token list, kick a single + // sync pass, and let the Rust persister fire the // `on_persist_token_balances_fn` callback — the Swift // handler maps that onto SwiftData rows that the rest of // the app reads via @Query (recipient pickers, Burn / // Transfer / DestroyFrozen views). Failures here are // non-fatal: the display fetch below still surfaces the // numbers, and the next reload tries again. - if let walletId = identity.wallet?.walletId, - let wallet = walletManager.wallet(for: walletId) { - let identityBytes = identity.identityId - let pairs: [(identityId: Identifier, tokenId: Identifier)] = - idToToken.keys.compactMap { tokenIdBase58 in - guard let tokenIdBytes = Data.identifier(fromBase58: tokenIdBase58) else { - return nil - } - return (identityId: identityBytes, tokenId: tokenIdBytes) - } - do { - try await wallet.watchAndSyncTokenBalances(pairs: pairs) - } catch { - print("⚠️ token watch+sync failed: \(error)") - } + // + // (`registerIdentityForTokenSync` is idempotent — calling + // again with a different token list replaces the watched + // set; balances for tokens kept across the swap survive.) 
+ let identityBytes = identity.identityId + let tokenIdData: [Identifier] = idToToken.keys.compactMap { tokenIdBase58 in + Data.identifier(fromBase58: tokenIdBase58) + } + do { + try walletManager.registerIdentityForTokenSync( + identityId: identityBytes, + tokenIds: tokenIdData + ) + try await walletManager.syncIdentityTokensNow() + } catch { + print("⚠️ identity token sync failed: \(error)") } do { diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/RegisterNameView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/RegisterNameView.swift index 039c5347195..ea9cb4ad338 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/RegisterNameView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/RegisterNameView.swift @@ -407,13 +407,23 @@ struct RegisterNameView: View { // `modelContext.container`. let signer = KeychainSigner(modelContainer: modelContext.container) + // The DPNS document stores `label` (display form, what the user + // typed) and `normalizedLabel` (homograph-safe lowercase, used + // for uniqueness lookup). The SDK side derives `normalizedLabel` + // from `label` via `convert_to_homograph_safe_chars`, so we hand + // it the raw trimmed input. Passing `normalizedUsername` here + // would make both columns identical, losing the original casing + // and the i-vs-1 / o-vs-0 distinctions from the cache and from + // every subsequent display. + let displayLabel = username.trimmingCharacters(in: .whitespacesAndNewlines) + isRegistering = true Task { do { let registeredName = try await wallet.registerDpnsName( identityId: identity.identityId, - name: normalizedUsername, + name: displayLabel, signer: signer ) @@ -426,17 +436,18 @@ struct RegisterNameView: View { // can immediately add this name to its local state and // skip the "wait until I leave + come back" round-trip. // - // Pass the bare label (`normalizedUsername`), NOT the FFI's - // full-domain return value (`registeredName` = "name.dash"). 
- // The parent's `dpnsNames` array stores bare labels — that's - // what `managed.getDpnsNames()` returns — so passing the - // full domain here would render "trym0re2.dash" instead of - // "trym0re2" until the next `loadDPNSNames` round. - onRegistered?(normalizedUsername) + // Pass the bare display label (what we just registered), + // NOT the FFI's full-domain return value + // (`registeredName` = "name.dash"). The parent's + // `dpnsNames` array stores bare labels — that's what + // `managed.getDpnsNames()` returns — so passing the full + // domain here would render "label.dash" instead of "label" + // until the next `loadDPNSNames` round. + onRegistered?(displayLabel) registrationSuccess = true errorMessage = isContested ? - "Successfully started contest for \(normalizedUsername). Follow \(appState.currentNetwork == .mainnet ? "14 days" : "90 minutes") to resolution." : + "Successfully started contest for \(displayLabel). Allow \(appState.currentNetwork == .mainnet ? "14 days" : "90 minutes") for resolution." : "Successfully registered \(registeredName)!" showingError = true isRegistering = false diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/SearchWalletsForIdentitiesView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/SearchWalletsForIdentitiesView.swift index e49bd0fa281..bdf78340f81 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/SearchWalletsForIdentitiesView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/SearchWalletsForIdentitiesView.swift @@ -90,7 +90,7 @@ struct SearchWalletsForIdentitiesView: View { searchButtonSection } - .navigationTitle("Search Wallets") + .navigationTitle("Re-scan for Identities") .navigationBarTitleDisplayMode(.inline) .toolbar { ToolbarItem(placement: .navigationBarTrailing) { @@ -288,7 +288,7 @@ struct SearchWalletsForIdentitiesView: View { Text("Scanning…") } else { Image(systemName: "magnifyingglass") - Text(result == nil ? 
"Search Wallet" : "Search Again") + Text(result == nil ? "Re-scan Wallet" : "Scan Again") } Spacer() } diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageExplorerView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageExplorerView.swift index a2680ec8ba1..38341b6434f 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageExplorerView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageExplorerView.swift @@ -11,6 +11,27 @@ struct StorageExplorerView: View { modelRow("Identities", icon: "person.crop.circle", type: PersistentIdentity.self) { IdentityStorageListView() } + // Identity-relationship caches: cascade-owned by + // `PersistentIdentity`, surfaced as their own explorer + // sections so the row counts and per-row drill-downs + // are visible without going through the parent identity. + modelRow("DPNS Names", icon: "at", type: PersistentDPNSName.self) { + DPNSNameStorageListView() + } + modelRow( + "DashPay Profiles", + icon: "person.text.rectangle", + type: PersistentDashpayProfile.self + ) { + DashpayProfileStorageListView() + } + modelRow( + "Contact Requests", + icon: "person.crop.circle.badge.plus", + type: PersistentDashpayContactRequest.self + ) { + DashpayContactRequestStorageListView() + } modelRow("Documents", icon: "doc.text", type: PersistentDocument.self) { DocumentStorageListView() } @@ -132,6 +153,9 @@ struct StorageExplorerView: View { counts[key] = (try? modelContext.fetchCount(FetchDescriptor<T>())) ?? 
0 } count(PersistentIdentity.self) + count(PersistentDPNSName.self) + count(PersistentDashpayProfile.self) + count(PersistentDashpayContactRequest.self) count(PersistentDocument.self) count(PersistentDataContract.self) count(PersistentPublicKey.self) diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift index 8823bf7c7a5..032a5fc0dd6 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift @@ -307,6 +307,152 @@ struct PublicKeyStorageListView: View { } } +// MARK: - PersistentDPNSName + +/// Storage-explorer list of every confirmed DPNS label across all +/// identities. Newest acquisition first — `acquiredAt` is Unix-millis +/// from `DpnsNameInfo.acquired_at` and zero-valued rows (legacy, +/// un-timestamped) naturally fall to the bottom. +struct DPNSNameStorageListView: View { + @Query(sort: \PersistentDPNSName.acquiredAt, order: .reverse) + private var records: [PersistentDPNSName] + + var body: some View { + List(records) { record in + NavigationLink(destination: DPNSNameStorageDetailView(record: record)) { + VStack(alignment: .leading, spacing: 4) { + Text("\(record.label).\(record.parentDomainName)") + .font(.body).lineLimit(1) + Text(record.identity.identityIdBase58) + .font(.caption) + .foregroundColor(.secondary) + .lineLimit(1) + .truncationMode(.middle) + } + } + } + .navigationTitle("DPNS Names (\(records.count))") + .overlay { + if records.isEmpty { + ContentUnavailableView("No Records", systemImage: "at") + } + } + } +} + +// MARK: - PersistentDashpayProfile + +/// Storage-explorer list of every cached DashPay profile. One row +/// per (network, identity). Newest profile update first. 
+struct DashpayProfileStorageListView: View { + @Query(sort: \PersistentDashpayProfile.lastUpdated, order: .reverse) + private var records: [PersistentDashpayProfile] + + var body: some View { + List(records) { record in + NavigationLink(destination: DashpayProfileStorageDetailView(record: record)) { + VStack(alignment: .leading, spacing: 4) { + Text(record.displayName ?? "(no display name)") + .font(.body).lineLimit(1) + Text(record.identity.identityIdBase58) + .font(.caption) + .foregroundColor(.secondary) + .lineLimit(1) + .truncationMode(.middle) + } + } + } + .navigationTitle("DashPay Profiles (\(records.count))") + .overlay { + if records.isEmpty { + ContentUnavailableView("No Records", systemImage: "person.text.rectangle") + } + } + } +} + +// MARK: - PersistentDashpayContactRequest + +/// Storage-explorer list of every DashPay contact-request row. +/// Grouped by direction (Outgoing / Incoming) — `isOutgoing` partitions +/// the rows because the encrypted payload differs per direction (each +/// side seals to the other party's identity key), so the two +/// directions are inherently distinct rows even for the same +/// (owner, contact) pair. Within each section, newest request first +/// (`createdAtMillis` desc; `0` falls to the bottom). 
+struct DashpayContactRequestStorageListView: View { + @Query private var records: [PersistentDashpayContactRequest] + + private var outgoing: [PersistentDashpayContactRequest] { + records.filter { $0.isOutgoing } + .sorted { $0.createdAtMillis > $1.createdAtMillis } + } + + private var incoming: [PersistentDashpayContactRequest] { + records.filter { !$0.isOutgoing } + .sorted { $0.createdAtMillis > $1.createdAtMillis } + } + + var body: some View { + List { + if !outgoing.isEmpty { + Section("Outgoing (\(outgoing.count))") { + ForEach(outgoing) { record in + contactRequestLink(record) + } + } + } + if !incoming.isEmpty { + Section("Incoming (\(incoming.count))") { + ForEach(incoming) { record in + contactRequestLink(record) + } + } + } + } + .navigationTitle("Contact Requests (\(records.count))") + .overlay { + if records.isEmpty { + ContentUnavailableView( + "No Records", + systemImage: "person.crop.circle.badge.plus" + ) + } + } + } + + @ViewBuilder + private func contactRequestLink( + _ record: PersistentDashpayContactRequest + ) -> some View { + NavigationLink(destination: DashpayContactRequestStorageDetailView(record: record)) { + VStack(alignment: .leading, spacing: 4) { + Text(shortHex(record.contactIdentityId)) + .font(.system(.body, design: .monospaced)) + .lineLimit(1) + .truncationMode(.middle) + Text("from \(shortHex(record.ownerIdentityId))") + .font(.caption) + .foregroundColor(.secondary) + .lineLimit(1) + .truncationMode(.middle) + } + } + } + + /// Render a 32-byte identity id as a truncated "head…tail" digest + /// to keep the row concise. Mirrors the truncation pattern other + /// storage list views use for ids. 
+ private func shortHex(_ data: Data) -> String { + guard data.count >= 8 else { + return data.map { String(format: "%02x", $0) }.joined() + } + let head = data.prefix(4).map { String(format: "%02x", $0) }.joined() + let tail = data.suffix(4).map { String(format: "%02x", $0) }.joined() + return "\(head)…\(tail)" + } +} + // MARK: - PersistentToken struct TokenStorageListView: View { @@ -621,6 +767,11 @@ struct CoreAddressStorageListView: View { @Query(sort: [SortDescriptor(\PersistentCoreAddress.addressIndex)]) private var records: [PersistentCoreAddress] + /// Live-search query. Matches case-insensitively against the + /// Base58Check address, derivation path, and address index. + /// Empty string disables the filter. + @State private var searchText: String = "" + /// Composite key identifying one (wallet, account) bucket. All /// pools (External / Internal / Absent / Absent Hardened) for a /// given account collapse into a single section — the pool name @@ -646,12 +797,30 @@ struct CoreAddressStorageListView: View { } } + /// Records narrowed by `searchText`. Empty query passes + /// everything through. Match runs case-insensitively against the + /// address, derivation path, and stringified `addressIndex` so + /// the user can paste a Base58Check, type "44'/1'", or just + /// "/3" to find a specific row. Done before grouping so empty + /// sections drop out cleanly. + private var filteredRecords: [PersistentCoreAddress] { + let trimmed = searchText.trimmingCharacters(in: .whitespacesAndNewlines) + guard !trimmed.isEmpty else { return records } + let needle = trimmed.lowercased() + return records.filter { record in + if record.address.lowercased().contains(needle) { return true } + if record.derivationPath.lowercased().contains(needle) { return true } + if String(record.addressIndex).contains(needle) { return true } + return false + } + } + /// Group addresses by (wallet, account). 
Addresses within a group /// are sorted by (pool tag, derivation index) so external pool /// entries come first, followed by internal, followed by any /// absent-pool entries — each in index order. private var groups: [(GroupKey, [PersistentCoreAddress])] { - let grouped = Dictionary(grouping: records) { record -> GroupKey in + let grouped = Dictionary(grouping: filteredRecords) { record -> GroupKey in let account = record.account let wallet = account?.wallet return GroupKey( @@ -689,13 +858,16 @@ struct CoreAddressStorageListView: View { } } } - .navigationTitle("Core Addresses (\(records.count))") + .navigationTitle("Core Addresses (\(filteredRecords.count))") + .searchable(text: $searchText, prompt: "Search address, path, or index") .overlay { if records.isEmpty { ContentUnavailableView( "No Records", systemImage: "square.and.pencil" ) + } else if filteredRecords.isEmpty { + ContentUnavailableView.search(text: searchText) } } } diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift index 0697af865bb..f69218ef6be 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift @@ -97,6 +97,20 @@ struct IdentityStorageDetailView: View { FieldRow(label: "Public Keys", value: "\(record.publicKeys.count)") FieldRow(label: "Documents", value: "\(record.documents.count)") FieldRow(label: "Token Balances", value: "\(record.tokenBalances.count)") + FieldRow(label: "Owned Data Contracts", value: "\(record.ownedDataContracts.count)") + // DashPay / DPNS relationships added with the + // contact-request and profile changesets. Surface + // counts here; the dedicated storage-explorer + // sections own the per-row drill-down. 
+ FieldRow(label: "DPNS Names", value: "\(record.dpnsNames.count)") + FieldRow( + label: "DashPay Profile", + value: record.dashpayProfile != nil ? "Present" : "None" + ) + FieldRow( + label: "Contact Requests", + value: "\(record.contactRequests.count)" + ) } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) @@ -109,6 +123,191 @@ struct IdentityStorageDetailView: View { } } +// MARK: - PersistentDPNSName + +/// Detail view for one cached DPNS-label row. Surfaces every stored +/// field plus a navigation link back to the owning identity so the +/// explorer can hop between the parent identity and its individual +/// labels. +struct DPNSNameStorageDetailView: View { + let record: PersistentDPNSName + + var body: some View { + Form { + Section("Core") { + FieldRow(label: "Label", value: record.label) + FieldRow(label: "Normalized Label", value: record.normalizedLabel) + FieldRow(label: "Parent Domain", value: record.parentDomainName) + FieldRow( + label: "Normalized Parent Domain", + value: record.normalizedParentDomainName + ) + FieldRow(label: "Network", value: record.network.displayName) + } + Section("Status") { + // `acquiredAt` is Unix-millis from + // `DpnsNameInfo.acquired_at`. Zero when the FFI + // changeset didn't carry a timestamp (legacy rows + // before the field was wired through). + FieldRow( + label: "Acquired At (ms)", + value: record.acquiredAt == 0 ? 
"—" : "\(record.acquiredAt)" + ) + if record.acquiredAt > 0 { + let date = Date( + timeIntervalSince1970: TimeInterval(record.acquiredAt) / 1000.0 + ) + FieldRow(label: "Acquired", value: dateString(date)) + } + } + Section("Relationships") { + NavigationLink(destination: IdentityStorageDetailView(record: record.identity)) { + FieldRow( + label: "Owner Identity", + value: record.identity.identityIdBase58 + ) + } + FieldRow(label: "Owner ID (Hex)", value: record.identity.identityIdString) + } + Section("Timestamps") { + FieldRow(label: "Created", value: dateString(record.createdAt)) + FieldRow(label: "Updated", value: dateString(record.lastUpdated)) + } + } + .navigationTitle("DPNS Name") + .navigationBarTitleDisplayMode(.inline) + } +} + +// MARK: - PersistentDashpayProfile + +/// Detail view for one cached DashPay profile row. Mirrors every +/// stored profile field; optional ones render as "—" when nil so +/// the field stays visible (rather than disappearing) and partial +/// profiles are obvious in the explorer. +struct DashpayProfileStorageDetailView: View { + let record: PersistentDashpayProfile + + var body: some View { + Form { + Section("Core") { + FieldRow(label: "Display Name", value: record.displayName ?? "—") + FieldRow(label: "Public Message", value: record.publicMessage ?? "—") + // `bio` is reserved on the row for forwards-compat + // with future DashPay contract revisions; v3 doesn't + // populate it. Surface anyway so the column isn't + // invisible if a later contract lights it up. + FieldRow(label: "Bio", value: record.bio ?? "—") + FieldRow(label: "Network", value: record.network.displayName) + } + Section("Avatar") { + FieldRow(label: "URL", value: record.avatarUrl ?? "—") + FieldRow( + label: "Hash (32 B)", + value: record.avatarHash.map { hexString($0) } ?? "—" + ) + FieldRow( + label: "Fingerprint (8 B)", + value: record.avatarFingerprint.map { hexString($0) } ?? 
"—" + ) + } + Section("Relationships") { + NavigationLink(destination: IdentityStorageDetailView(record: record.identity)) { + FieldRow( + label: "Owner Identity", + value: record.identity.identityIdBase58 + ) + } + FieldRow(label: "Owner ID (Hex)", value: record.identity.identityIdString) + } + Section("Timestamps") { + FieldRow(label: "Created", value: dateString(record.createdAt)) + FieldRow(label: "Updated", value: dateString(record.lastUpdated)) + } + } + .navigationTitle("DashPay Profile") + .navigationBarTitleDisplayMode(.inline) + } +} + +// MARK: - PersistentDashpayContactRequest + +/// Detail view for one DashPay contact-request row. Surfaces every +/// payload field plus the relationship pair (owner / contact). The +/// `ownerIdentityId` denorm shadow is presented in the relationships +/// section with a note — it's redundant with `owner.identityId` but +/// query-friendly. +struct DashpayContactRequestStorageDetailView: View { + let record: PersistentDashpayContactRequest + + var body: some View { + Form { + Section("Core") { + FieldRow( + label: "Direction", + value: record.isOutgoing ? "Outgoing" : "Incoming" + ) + FieldRow(label: "Network", value: record.network.displayName) + FieldRow(label: "Sender Key Index", value: "\(record.senderKeyIndex)") + FieldRow(label: "Recipient Key Index", value: "\(record.recipientKeyIndex)") + FieldRow(label: "Account Reference", value: "\(record.accountReference)") + FieldRow( + label: "Core Height Created At", + value: "\(record.coreHeightCreatedAt)" + ) + FieldRow( + label: "Created At (ms)", + value: record.createdAtMillis == 0 + ? "—" + : "\(record.createdAtMillis)" + ) + } + Section("Payload") { + FieldRow( + label: "Encrypted Public Key", + value: "\(record.encryptedPublicKey.count) bytes" + ) + FieldRow( + label: "Encrypted Account Label", + value: record.encryptedAccountLabel.map { "\($0.count) bytes" } ?? 
"—" + ) + FieldRow( + label: "Auto-Accept Proof", + value: record.autoAcceptProof.map { "\($0.count) bytes" } ?? "—" + ) + } + Section("Relationships") { + NavigationLink(destination: IdentityStorageDetailView(record: record.owner)) { + FieldRow( + label: "Owner Identity", + value: record.owner.identityIdBase58 + ) + } + FieldRow( + label: "Owner ID (Hex, denorm)", + value: hexString(record.ownerIdentityId) + ) + FieldRow( + label: "Contact ID (Hex)", + value: hexString(record.contactIdentityId) + ) + } + Section("Timestamps") { + if record.createdAtMillis > 0 { + let date = Date( + timeIntervalSince1970: TimeInterval(record.createdAtMillis) / 1000.0 + ) + FieldRow(label: "Document Created", value: dateString(date)) + } + FieldRow(label: "Row Created", value: dateString(record.createdAt)) + FieldRow(label: "Row Updated", value: dateString(record.lastUpdated)) + } + } + .navigationTitle(record.isOutgoing ? "Outgoing Contact Request" : "Incoming Contact Request") + .navigationBarTitleDisplayMode(.inline) + } +} + // MARK: - PersistentDocument struct DocumentStorageDetailView: View { @@ -119,19 +318,83 @@ struct DocumentStorageDetailView: View { Section("Core") { FieldRow(label: "Document ID", value: record.documentId) FieldRow(label: "Type", value: record.documentType) + FieldRow(label: "Display Title", value: record.displayTitle) FieldRow(label: "Revision", value: "\(record.revision)") FieldRow(label: "Contract ID", value: record.contractId) FieldRow(label: "Owner ID", value: record.ownerId) FieldRow(label: "Network", value: record.network.displayName) FieldRow(label: "Deleted", value: record.isDeleted ? "Yes" : "No") } - Section("Timestamps") { - FieldRow(label: "Created", value: dateString(record.localCreatedAt)) - FieldRow(label: "Updated", value: dateString(record.localUpdatedAt)) + Section("Block Heights") { + FieldRow( + label: "Created (Platform)", + value: record.createdAtBlockHeight.map { "\($0)" } ?? 
"—" + ) + FieldRow( + label: "Updated (Platform)", + value: record.updatedAtBlockHeight.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Transferred (Platform)", + value: record.transferredAtBlockHeight.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Created (Core)", + value: record.createdAtCoreBlockHeight.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Updated (Core)", + value: record.updatedAtCoreBlockHeight.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Transferred (Core)", + value: record.transferredAtCoreBlockHeight.map { "\($0)" } ?? "—" + ) + } + Section("Relationships") { + if let docType = record.documentType_relation { + NavigationLink(destination: DocumentTypeStorageDetailView(record: docType)) { + FieldRow(label: "Document Type", value: docType.name) + } + } else { + FieldRow(label: "Document Type", value: "Not linked") + } + if let contract = record.dataContract { + NavigationLink(destination: DataContractStorageDetailView(record: contract)) { + FieldRow(label: "Data Contract", value: contract.name) + } + } else { + FieldRow(label: "Data Contract", value: "Not linked") + } + // `ownerIdentity` is only populated for documents whose + // owner happens to also be a local identity. Most + // Platform-fetched documents will have nil here. + if let owner = record.ownerIdentity { + NavigationLink(destination: IdentityStorageDetailView(record: owner)) { + FieldRow(label: "Owner Identity", value: owner.identityIdBase58) + } + } else { + FieldRow(label: "Owner Identity", value: "Not local") + } } - if let json = jsonString(record.data) { - Section("Data") { - Text(json).font(.system(.caption, design: .monospaced)).textSelection(.enabled) + Section("Timestamps") { + // `createdAt` / `updatedAt` are the platform-side + // document timestamps; `localCreatedAt` / `localUpdatedAt` + // are when this row entered / changed in the local + // SwiftData store. They diverge whenever the row was + // back-filled or refreshed from a remote fetch. 
+ FieldRow(label: "Created (Platform)", value: dateString(record.createdAt)) + FieldRow(label: "Updated (Platform)", value: dateString(record.updatedAt)) + FieldRow(label: "Transferred (Platform)", value: dateString(record.transferredAt)) + FieldRow(label: "Local Created", value: dateString(record.localCreatedAt)) + FieldRow(label: "Local Updated", value: dateString(record.localUpdatedAt)) + } + Section("Payload") { + FieldRow(label: "Data Size", value: "\(record.data.count) bytes") + if let json = jsonString(record.data) { + Text(json) + .font(.system(.caption, design: .monospaced)) + .textSelection(.enabled) } } } @@ -154,16 +417,52 @@ struct DataContractStorageDetailView: View { FieldRow(label: "Owner (Base58)", value: record.ownerIdBase58 ?? "None") FieldRow(label: "Network", value: record.network.displayName) FieldRow(label: "Has Tokens", value: record.hasTokens ? "Yes" : "No") + FieldRow(label: "Description", value: record.contractDescription ?? "—") + FieldRow( + label: "Schema Defs", + value: record.schemaDefs.map { "\($0)" } ?? "—" + ) } Section("Flags") { FieldRow(label: "Can Be Deleted", value: record.canBeDeleted ? "Yes" : "No") FieldRow(label: "Read Only", value: record.readonly ? "Yes" : "No") FieldRow(label: "Keeps History", value: record.keepsHistory ? "Yes" : "No") } + Section("Document Defaults") { + // Contract-level fallbacks applied to document types + // that don't override the corresponding flag. Surfaced + // separately from the contract-wide flags above + // because they govern docs, not the contract itself. + FieldRow( + label: "Docs Keep History", + value: record.documentsKeepHistoryContractDefault ? "Yes" : "No" + ) + FieldRow( + label: "Docs Mutable", + value: record.documentsMutableContractDefault ? "Yes" : "No" + ) + FieldRow( + label: "Docs Can Be Deleted", + value: record.documentsCanBeDeletedContractDefault ? 
"Yes" : "No" + ) + } + Section("Keywords") { + FieldRow(label: "Count", value: "\(record.keywordRelations.count)") + if !record.keywords.isEmpty { + FieldRow(label: "Values", value: record.keywords.joined(separator: ", ")) + } + } Section("Relationships") { FieldRow(label: "Document Types", value: "\(record.documentTypes?.count ?? 0)") FieldRow(label: "Tokens", value: "\(record.tokens?.count ?? 0)") FieldRow(label: "Documents", value: "\(record.documents.count)") + if let owner = record.ownerIdentity { + NavigationLink(destination: IdentityStorageDetailView(record: owner)) { + FieldRow(label: "Owner Identity", value: owner.identityIdBase58) + } + } else { + FieldRow(label: "Owner Identity", value: "Not local") + } } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) @@ -171,8 +470,28 @@ struct DataContractStorageDetailView: View { FieldRow(label: "Accessed", value: dateString(record.lastAccessedAt)) FieldRow(label: "Synced", value: dateString(record.lastSyncedAt)) } - Section("Serialized") { - FieldRow(label: "Contract Size", value: "\(record.serializedContract.count) bytes") + Section("Serialized Blobs") { + FieldRow( + label: "Contract (JSON)", + value: "\(record.serializedContract.count) bytes" + ) + FieldRow( + label: "Binary (CBOR)", + value: record.binarySerialization.map { "\($0.count) bytes" } ?? "—" + ) + FieldRow(label: "Schema Data", value: "\(record.schemaData.count) bytes") + FieldRow( + label: "Document Types Data", + value: "\(record.documentTypesData.count) bytes" + ) + FieldRow( + label: "Tokens Data", + value: record.tokensData.map { "\($0.count) bytes" } ?? "—" + ) + FieldRow( + label: "Groups Data", + value: record.groupsData.map { "\($0.count) bytes" } ?? 
"—" + ) } } .navigationTitle("Data Contract") @@ -189,15 +508,43 @@ struct PublicKeyStorageDetailView: View { Form { Section("Core") { FieldRow(label: "Key ID", value: "\(record.keyId)") - FieldRow(label: "Purpose", value: record.purpose) - FieldRow(label: "Security Level", value: record.securityLevel) - FieldRow(label: "Key Type", value: record.keyType) + // Stored as the raw `String(rawValue)`; project the + // human-readable name when the value parses to a known + // enum case so the row shows e.g. "Authentication (0)". + FieldRow(label: "Purpose", value: purposeDisplay) + FieldRow(label: "Security Level", value: securityLevelDisplay) + FieldRow(label: "Key Type", value: keyTypeDisplay) FieldRow(label: "Read Only", value: record.readOnly ? "Yes" : "No") FieldRow(label: "Disabled At", value: record.disabledAt.map { "\($0)" } ?? "No") + FieldRow(label: "Identity ID (Base58)", value: record.identityId) } Section("Data") { FieldRow(label: "Public Key", value: hexString(record.publicKeyData)) - FieldRow(label: "Private Key", value: record.hasPrivateKeyIdentifier ? "Present" : "Not set") + if let bounds = record.contractBounds, !bounds.isEmpty { + FieldRow(label: "Contract Bounds", value: "\(bounds.count)") + ForEach(Array(bounds.enumerated()), id: \.offset) { _, contractId in + FieldRow(label: "Contract", value: contractId.toBase58String()) + } + } else { + FieldRow(label: "Contract Bounds", value: "None") + } + // Surface the keychain identifier itself rather than a + // bare presence/absence flag — it's load-bearing for + // debugging the privkey<->pubkey wiring (the row links + // by string identifier, not foreign-key). + FieldRow( + label: "Private Key Keychain ID", + value: record.privateKeyKeychainIdentifier ?? 
"None" + ) + } + Section("Relationships") { + if let identity = record.identity { + NavigationLink(destination: IdentityStorageDetailView(record: identity)) { + FieldRow(label: "Identity", value: identity.identityIdBase58) + } + } else { + FieldRow(label: "Identity", value: "None") + } } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) @@ -207,10 +554,53 @@ struct PublicKeyStorageDetailView: View { .navigationTitle("Public Key") .navigationBarTitleDisplayMode(.inline) } + + private var purposeDisplay: String { + if let p = record.purposeEnum { return "\(p.name) (\(record.purpose))" } + return record.purpose + } + + private var securityLevelDisplay: String { + if let s = record.securityLevelEnum { return "\(s.name) (\(record.securityLevel))" } + return record.securityLevel + } + + private var keyTypeDisplay: String { + if let t = record.keyTypeEnum { return "\(t.name) (\(record.keyType))" } + return record.keyType + } } // MARK: - PersistentToken +/// Compact one-row summary of a `ChangeControlRules` value: shows the +/// authorized + admin role pair, with a trailing tag for any of the +/// three `*Allowed` toggles that flip away from their defaults. Used +/// across every change-rule slot on the token detail view so the +/// section stays scannable. +private struct ChangeControlRulesRow: View { + let label: String + let rules: ChangeControlRules? 
+ + var body: some View { + if let rules = rules { + FieldRow(label: label, value: format(rules)) + } else { + FieldRow(label: label, value: "Not set") + } + } + + private func format(_ rules: ChangeControlRules) -> String { + var parts: [String] = [] + parts.append("auth=\(rules.authorizedToMakeChange)") + parts.append("admin=\(rules.adminActionTakers)") + if rules.changingAuthorizedActionTakersToNoOneAllowed { parts.append("auth→none") } + if rules.changingAdminActionTakersToNoOneAllowed { parts.append("admin→none") } + if rules.selfChangingAdminActionTakersAllowed { parts.append("self-admin") } + return parts.joined(separator: " · ") + } +} + struct TokenStorageDetailView: View { let record: PersistentToken @@ -220,12 +610,137 @@ struct TokenStorageDetailView: View { FieldRow(label: "ID", value: hexString(record.id)) FieldRow(label: "Contract (Base58)", value: record.contractIdBase58) FieldRow(label: "Name", value: record.name) + FieldRow(label: "Display Name", value: record.displayName) FieldRow(label: "Position", value: "\(record.position)") FieldRow(label: "Decimals", value: "\(record.decimals)") FieldRow(label: "Base Supply", value: record.formattedBaseSupply) + FieldRow(label: "Max Supply", value: record.maxSupply ?? "Unlimited") + FieldRow(label: "Description", value: record.tokenDescription ?? "—") + } + Section("Status") { FieldRow(label: "Paused", value: record.isPaused ? "Yes" : "No") + FieldRow( + label: "Allow Transfer to Frozen", + value: record.allowTransferToFrozenBalance ? "Yes" : "No" + ) + } + Section("Localization") { + let locs = record.localizations ?? [:] + FieldRow(label: "Languages", value: "\(locs.count)") + ForEach(locs.keys.sorted(), id: \.self) { lang in + if let loc = locs[lang] { + FieldRow( + label: lang, + value: "\(loc.singularForm) / \(loc.pluralForm)" + ) + } + } + } + Section("History Rules") { + FieldRow(label: "Transfers", value: record.keepsTransferHistory ? 
"Yes" : "No") + FieldRow(label: "Freezing", value: record.keepsFreezingHistory ? "Yes" : "No") + FieldRow(label: "Minting", value: record.keepsMintingHistory ? "Yes" : "No") + FieldRow(label: "Burning", value: record.keepsBurningHistory ? "Yes" : "No") + FieldRow( + label: "Direct Pricing", + value: record.keepsDirectPricingHistory ? "Yes" : "No" + ) + FieldRow( + label: "Direct Purchase", + value: record.keepsDirectPurchaseHistory ? "Yes" : "No" + ) + } + Section("Change Control Rules") { + ChangeControlRulesRow(label: "Conventions", rules: record.conventionsChangeRules) + ChangeControlRulesRow(label: "Max Supply", rules: record.maxSupplyChangeRules) + ChangeControlRulesRow(label: "Manual Mint", rules: record.manualMintingRules) + ChangeControlRulesRow(label: "Manual Burn", rules: record.manualBurningRules) + ChangeControlRulesRow(label: "Freeze", rules: record.freezeRules) + ChangeControlRulesRow(label: "Unfreeze", rules: record.unfreezeRules) + ChangeControlRulesRow( + label: "Destroy Frozen", + rules: record.destroyFrozenFundsRules + ) + ChangeControlRulesRow(label: "Emergency", rules: record.emergencyActionRules) + ChangeControlRulesRow(label: "Trade Mode", rules: record.tradeModeChangeRules) + } + Section("Distribution") { + // `perpetualDistribution` and `preProgrammedDistribution` + // are typed Codable structs — surface the headline + // fields per slot so a misconfigured token shows up + // here rather than vanishing behind a presence flag. + if let perp = record.perpetualDistribution { + FieldRow(label: "Perpetual", value: "Configured") + FieldRow(label: " Recipient", value: perp.distributionRecipient) + FieldRow(label: " Enabled", value: perp.enabled ? 
"Yes" : "No") + FieldRow(label: " Last", value: dateString(perp.lastDistributionTime)) + FieldRow(label: " Next", value: dateString(perp.nextDistributionTime)) + } else { + FieldRow(label: "Perpetual", value: "Not configured") + } + if let prog = record.preProgrammedDistribution { + FieldRow(label: "Pre-Programmed", value: "Configured") + FieldRow(label: " Schedule Events", value: "\(prog.distributionSchedule.count)") + FieldRow(label: " Current Index", value: "\(prog.currentEventIndex)") + FieldRow(label: " Total Distributed", value: prog.totalDistributed) + FieldRow(label: " Remaining", value: prog.remainingToDistribute) + FieldRow(label: " Active", value: prog.isActive ? "Yes" : "No") + FieldRow(label: " Paused", value: prog.isPaused ? "Yes" : "No") + FieldRow(label: " Completed", value: prog.isCompleted ? "Yes" : "No") + } else { + FieldRow(label: "Pre-Programmed", value: "Not configured") + } + FieldRow( + label: "Destination Identity", + value: record.newTokensDestinationIdentityBase58 ?? "Not set" + ) + FieldRow( + label: "Choose Mint Destination", + value: record.mintingAllowChoosingDestination ? "Yes" : "No" + ) + } + if let dcr = record.distributionChangeRules { + Section("Distribution Change Rules") { + ChangeControlRulesRow( + label: "Perpetual", + rules: dcr.perpetualDistributionRules + ) + ChangeControlRulesRow( + label: "Destination", + rules: dcr.newTokensDestinationIdentityRules + ) + ChangeControlRulesRow( + label: "Choose Destination", + rules: dcr.mintingAllowChoosingDestinationRules + ) + ChangeControlRulesRow( + label: "Direct Purchase Pricing", + rules: dcr.changeDirectPurchasePricingRules + ) + } + } + Section("Marketplace") { + FieldRow(label: "Trade Mode", value: record.tradeMode.displayName) + FieldRow(label: "Tradeable", value: record.isTradeable ? "Yes" : "No") + } + Section("Main Control Group") { + FieldRow( + label: "Position", + value: record.mainControlGroupPosition.map { "\($0)" } ?? 
"—" + ) + FieldRow( + label: "Can Be Modified", + value: record.mainControlGroupCanBeModified ?? "—" + ) } Section("Relationships") { + if let contract = record.dataContract { + NavigationLink(destination: DataContractStorageDetailView(record: contract)) { + FieldRow(label: "Data Contract", value: contract.name) + } + } else { + FieldRow(label: "Data Contract", value: "None") + } FieldRow(label: "Balances", value: "\(record.balances?.count ?? 0)") FieldRow(label: "History Events", value: "\(record.historyEvents?.count ?? 0)") } @@ -250,6 +765,7 @@ struct TokenBalanceStorageDetailView: View { FieldRow(label: "Token ID", value: record.tokenId) FieldRow(label: "Identity ID", value: hexString(record.identityId)) FieldRow(label: "Balance", value: "\(record.balance)") + FieldRow(label: "Display Balance", value: record.displayBalance) FieldRow(label: "Frozen", value: record.frozen ? "Yes" : "No") FieldRow(label: "Network", value: record.network.displayName) } @@ -258,6 +774,22 @@ struct TokenBalanceStorageDetailView: View { FieldRow(label: "Symbol", value: record.tokenSymbol ?? "None") FieldRow(label: "Decimals", value: record.tokenDecimals.map { "\($0)" } ?? 
"None") } + Section("Relationships") { + if let identity = record.identity { + NavigationLink(destination: IdentityStorageDetailView(record: identity)) { + FieldRow(label: "Identity", value: identity.identityIdBase58) + } + } else { + FieldRow(label: "Identity", value: "None") + } + if let token = record.token { + NavigationLink(destination: TokenStorageDetailView(record: token)) { + FieldRow(label: "Token", value: token.name) + } + } else { + FieldRow(label: "Token", value: "None") + } + } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) FieldRow(label: "Updated", value: dateString(record.lastUpdated)) @@ -277,19 +809,61 @@ struct TokenHistoryStorageDetailView: View { var body: some View { Form { Section("Core") { + FieldRow(label: "Event ID", value: record.id.uuidString) FieldRow(label: "Event Type", value: record.eventType) - FieldRow(label: "Transaction ID", value: record.transactionId.map { hexString($0) } ?? "None") - FieldRow(label: "Block Height", value: record.blockHeight.map { "\($0)" } ?? "None") - FieldRow(label: "Amount", value: record.amount.map { "\($0)" } ?? "None") + FieldRow(label: "Display", value: record.displayTitle) + FieldRow( + label: "Transaction ID", + value: record.transactionId.map { hexString($0) } ?? "None" + ) + FieldRow( + label: "Block Height", + value: record.blockHeight.map { "\($0)" } ?? "None" + ) + FieldRow( + label: "Core Block Height", + value: record.coreBlockHeight.map { "\($0)" } ?? "None" + ) + FieldRow(label: "Amount", value: record.amount ?? "None") + FieldRow(label: "Description", value: record.eventDescription ?? "—") } Section("Parties") { - FieldRow(label: "From", value: record.fromIdentity.map { hexString($0) } ?? "None") - FieldRow(label: "To", value: record.toIdentity.map { hexString($0) } ?? "None") + FieldRow( + label: "From", + value: record.fromIdentity.map { hexString($0) } ?? "None" + ) + FieldRow( + label: "To", + value: record.toIdentity.map { hexString($0) } ?? 
"None" + ) FieldRow(label: "Performed By", value: hexString(record.performedByIdentity)) } Section("Balance") { - FieldRow(label: "Before", value: record.balanceBefore.map { "\($0)" } ?? "None") - FieldRow(label: "After", value: record.balanceAfter.map { "\($0)" } ?? "None") + FieldRow(label: "Before", value: record.balanceBefore ?? "None") + FieldRow(label: "After", value: record.balanceAfter ?? "None") + } + // Optional event-type-specific payload (e.g. distribution + // recipient breakdown, emergency-action params). Render as + // pretty JSON when decodable; size only when not. + if let blob = record.additionalDataJSON { + Section("Additional Data") { + if let json = jsonString(blob) { + Text(json) + .font(.system(.caption, design: .monospaced)) + .textSelection(.enabled) + } else { + FieldRow(label: "Raw", value: "\(blob.count) bytes") + } + } + } + Section("Relationships") { + if let token = record.token { + NavigationLink(destination: TokenStorageDetailView(record: token)) { + FieldRow(label: "Token", value: token.name) + } + } else { + FieldRow(label: "Token", value: "None") + } } Section("Timestamps") { FieldRow(label: "Event", value: dateString(record.eventTimestamp)) @@ -311,15 +885,55 @@ struct DocumentTypeStorageDetailView: View { Section("Core") { FieldRow(label: "Name", value: record.name) FieldRow(label: "Contract (Base58)", value: record.contractIdBase58) + FieldRow(label: "Security Level", value: "\(record.securityLevel)") + FieldRow(label: "Trade Mode", value: "\(record.tradeMode)") + FieldRow( + label: "Creation Restriction Mode", + value: "\(record.creationRestrictionMode)" + ) } Section("Flags") { FieldRow(label: "Keeps History", value: record.documentsKeepHistory ? "Yes" : "No") FieldRow(label: "Mutable", value: record.documentsMutable ? "Yes" : "No") FieldRow(label: "Can Be Deleted", value: record.documentsCanBeDeleted ? "Yes" : "No") + FieldRow(label: "Transferable", value: record.documentsTransferable ? 
"Yes" : "No") + FieldRow( + label: "Requires Encryption Key", + value: record.requiresIdentityEncryptionBoundedKey ? "Yes" : "No" + ) + FieldRow( + label: "Requires Decryption Key", + value: record.requiresIdentityDecryptionBoundedKey ? "Yes" : "No" + ) + } + Section("Schema") { + // The schema and properties JSON blobs are stored as + // raw bytes; surface size first so an absent / tiny / + // huge schema is visible at a glance, then dump the + // pretty-printed payload below if it decodes. + FieldRow(label: "Schema Size", value: "\(record.schemaJSON.count) bytes") + FieldRow( + label: "Properties Size", + value: "\(record.propertiesJSON.count) bytes" + ) + if let req = record.requiredFieldsJSON { + FieldRow(label: "Required Fields Size", value: "\(req.count) bytes") + } + if let fields = record.requiredFields, !fields.isEmpty { + FieldRow(label: "Required Fields", value: fields.joined(separator: ", ")) + } } Section("Relationships") { + if let contract = record.dataContract { + NavigationLink(destination: DataContractStorageDetailView(record: contract)) { + FieldRow(label: "Data Contract", value: contract.name) + } + } else { + FieldRow(label: "Data Contract", value: "None") + } FieldRow(label: "Properties", value: "\(record.propertiesList?.count ?? 0)") FieldRow(label: "Indices", value: "\(record.indices?.count ?? 0)") + FieldRow(label: "Documents", value: "\(record.documentCount)") } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) @@ -341,6 +955,7 @@ struct IndexStorageDetailView: View { Section("Core") { FieldRow(label: "Name", value: record.name) FieldRow(label: "Document Type", value: record.documentTypeName) + FieldRow(label: "Contract ID (Hex)", value: hexString(record.contractId)) FieldRow(label: "Unique", value: record.unique ? "Yes" : "No") FieldRow(label: "Null Searchable", value: record.nullSearchable ? "Yes" : "No") FieldRow(label: "Contested", value: record.contested ? 
"Yes" : "No") @@ -352,6 +967,30 @@ struct IndexStorageDetailView: View { } } } + // `contestedDetailsJSON` is only populated when + // `contested == true`. Render the parsed payload + // pretty-printed; fall back to the raw size if the JSON + // bytes don't decode. + if let blob = record.contestedDetailsJSON { + Section("Contested Details") { + if let json = jsonString(blob) { + Text(json) + .font(.system(.caption, design: .monospaced)) + .textSelection(.enabled) + } else { + FieldRow(label: "Raw", value: "\(blob.count) bytes") + } + } + } + Section("Relationships") { + if let docType = record.documentType { + NavigationLink(destination: DocumentTypeStorageDetailView(record: docType)) { + FieldRow(label: "Document Type", value: docType.name) + } + } else { + FieldRow(label: "Document Type", value: "None") + } + } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) } @@ -372,13 +1011,49 @@ struct PropertyStorageDetailView: View { FieldRow(label: "Name", value: record.name) FieldRow(label: "Type", value: record.type) FieldRow(label: "Document Type", value: record.documentTypeName) + FieldRow(label: "Contract ID (Hex)", value: hexString(record.contractId)) FieldRow(label: "Required", value: record.isRequired ? "Yes" : "No") + FieldRow(label: "Transient", value: record.transient ? "Yes" : "No") + FieldRow(label: "Byte Array", value: record.byteArray ? "Yes" : "No") + FieldRow(label: "Description", value: record.fieldDescription ?? "—") } Section("Constraints") { - if let v = record.minLength { FieldRow(label: "Min Length", value: "\(v)") } - if let v = record.maxLength { FieldRow(label: "Max Length", value: "\(v)") } - if let v = record.pattern { FieldRow(label: "Pattern", value: v) } - if let v = record.format { FieldRow(label: "Format", value: v) } + FieldRow(label: "Format", value: record.format ?? "—") + FieldRow(label: "Content Media Type", value: record.contentMediaType ?? 
"—") + FieldRow(label: "Pattern", value: record.pattern ?? "—") + FieldRow( + label: "Min Length", + value: record.minLength.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Max Length", + value: record.maxLength.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Min Items", + value: record.minItems.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Max Items", + value: record.maxItems.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Min Value", + value: record.minValue.map { "\($0)" } ?? "—" + ) + FieldRow( + label: "Max Value", + value: record.maxValue.map { "\($0)" } ?? "—" + ) + } + Section("Relationships") { + if let docType = record.documentType { + NavigationLink(destination: DocumentTypeStorageDetailView(record: docType)) { + FieldRow(label: "Document Type", value: docType.name) + } + } else { + FieldRow(label: "Document Type", value: "None") + } } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) @@ -398,8 +1073,20 @@ struct KeywordStorageDetailView: View { Form { Section("Core") { FieldRow(label: "Keyword", value: record.keyword) + // `id` is the composite `"_"` row + // key. Surfaced for the storage explorer because it's + // load-bearing (uniqueness pivot) even though it's + // derived from the other two fields. + FieldRow(label: "Row ID", value: record.id) + FieldRow(label: "Contract ID (Base58)", value: record.contractId) + } + Section("Relationships") { if let contract = record.dataContract { - FieldRow(label: "Contract", value: contract.name) + NavigationLink(destination: DataContractStorageDetailView(record: contract)) { + FieldRow(label: "Data Contract", value: contract.name) + } + } else { + FieldRow(label: "Data Contract", value: "None") } } } @@ -421,6 +1108,14 @@ struct PlatformAddressesSyncStateStorageDetailView: View { var body: some View { Form { + Section("Scope") { + // `walletId` is the 32-byte unique scope key for this + // sync-state row. 
The current persistence layer writes a + // network-scoped key (one row per network) rather than a + // concrete wallet id, but the column name is preserved + // for schema compatibility — see model header doc. + FieldRow(label: "Scope Key (Hex)", value: hexString(record.walletId)) + } Section("Sync Watermark") { FieldRow(label: "Network", value: record.network.displayName) FieldRow(label: "Sync Height", value: "\(record.syncHeight)") @@ -492,6 +1187,15 @@ struct PlatformAddressDetailView: View { Section("Ownership") { FieldRow(label: "Wallet ID", value: hexString(record.walletId)) } + Section("Relationships") { + if let account = record.account { + NavigationLink(destination: AccountStorageDetailView(record: account)) { + FieldRow(label: "Account", value: account.accountTypeName) + } + } else { + FieldRow(label: "Account", value: "Not linked") + } + } Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) FieldRow(label: "Updated", value: dateString(record.lastUpdated)) @@ -507,6 +1211,15 @@ struct PlatformAddressDetailView: View { struct WalletStorageDetailView: View { let record: PersistentWallet + /// `lastSynced` is stored as Unix-seconds (`UInt64`). Render the + /// canonical date when non-zero so it matches the other + /// timestamp surfaces; "—" when the wallet has never synced. + private var lastSyncedDate: Date? { + record.lastSynced > 0 + ? Date(timeIntervalSince1970: TimeInterval(record.lastSynced)) + : nil + } + var body: some View { Form { Section("Core") { @@ -515,6 +1228,7 @@ struct WalletStorageDetailView: View { FieldRow(label: "Name", value: record.name ?? "None") FieldRow(label: "Birth Height", value: "\(record.birthHeight)") FieldRow(label: "Synced Height", value: "\(record.syncedHeight)") + FieldRow(label: "Imported", value: record.isImported ? 
"Yes" : "No") } Section("Balance") { FieldRow(label: "Confirmed", value: "\(record.balanceConfirmed)") @@ -533,6 +1247,7 @@ struct WalletStorageDetailView: View { Section("Timestamps") { FieldRow(label: "Created", value: dateString(record.createdAt)) FieldRow(label: "Updated", value: dateString(record.lastUpdated)) + FieldRow(label: "Last Synced", value: dateString(lastSyncedDate)) } } .navigationTitle("Wallet") @@ -546,13 +1261,13 @@ struct AccountStorageDetailView: View { let record: PersistentAccount /// Base58check-encoded xpub/tpub for this account, derived from - /// the stored ExtendedPubKey bytes. `nil` when the bytes are empty - /// (account created before the xpub-persistence path landed) or - /// decode fails. + /// the stored ExtendedPubKey bytes. `nil` when the bytes are + /// missing (account not yet hydrated) or decode fails. private var accountXpubString: String? { - PlatformWalletManager.accountExtendedPubKeyString( - bytes: record.accountExtendedPubKeyBytes - ) + guard let bytes = record.accountExtendedPubKeyBytes, !bytes.isEmpty else { + return nil + } + return PlatformWalletManager.accountExtendedPubKeyString(bytes: bytes) } /// Distinct transactions this account participates in: union of @@ -591,6 +1306,36 @@ struct AccountStorageDetailView: View { value: accountXpubString ?? "—" ) } + Section("Variant Disambiguators") { + // Account-identity disambiguators carried on every + // row: `standardTag` distinguishes BIP44 (0) from + // BIP32 (1) for Standard accounts; `registrationIndex` + // is the IdentityTopUp registration index; + // `keyClass` is the PlatformPayment key class; + // `userIdentityId` / `friendIdentityId` populate + // for the DashPay account variants. Only the + // disambiguators meaningful for the current + // `accountType` are populated; others are zero or + // empty by construction. 
+ FieldRow(label: "Standard Tag", value: "\(record.standardTag)") + FieldRow( + label: "Registration Index", + value: "\(record.registrationIndex)" + ) + FieldRow(label: "Key Class", value: "\(record.keyClass)") + FieldRow( + label: "User Identity ID", + value: record.userIdentityId.isEmpty + ? "—" + : hexString(record.userIdentityId) + ) + FieldRow( + label: "Friend Identity ID", + value: record.friendIdentityId.isEmpty + ? "—" + : hexString(record.friendIdentityId) + ) + } Section("Balance") { FieldRow(label: "Confirmed", value: "\(record.balanceConfirmed)") FieldRow(label: "Unconfirmed", value: "\(record.balanceUnconfirmed)") @@ -606,7 +1351,11 @@ struct AccountStorageDetailView: View { // account-scoped, so this has to be derived in Swift. FieldRow(label: "Transactions", value: "\(distinctTransactionCount)") FieldRow(label: "TXOs", value: "\(txoCount)") - FieldRow(label: "Addresses", value: "\(record.coreAddresses.count)") + FieldRow(label: "Core Addresses", value: "\(record.coreAddresses.count)") + FieldRow( + label: "Platform Addresses", + value: "\(record.platformAddresses.count)" + ) FieldRow(label: "Wallet", value: record.wallet.name ?? hexString(record.wallet.walletId)) } ForEach(addressSections(), id: \.0) { poolName, addresses in @@ -769,7 +1518,6 @@ struct TxoStorageDetailView: View { FieldRow(label: "TXID", value: record.txidHex) FieldRow(label: "Vout", value: "\(record.vout)") FieldRow(label: "Amount", value: record.formattedAmount) - FieldRow(label: "Address", value: record.address) } Section("Status") { FieldRow(label: "Height", value: "\(record.height)") @@ -780,6 +1528,19 @@ struct TxoStorageDetailView: View { FieldRow(label: "Spent", value: record.isSpent ? "Yes" : "No") } Section("Relationships") { + // Address: tappable when the `coreAddress` link + // exists (navigates to the address detail), plain + // text fallback otherwise. 
The Base58Check string + // is the authoritative identifier in either case; + // the link just makes the address-row drill-down + // one tap away when we have it. + if let coreAddress = record.coreAddress { + NavigationLink(destination: CoreAddressDetailView(record: coreAddress)) { + FieldRow(label: "Address", value: record.address) + } + } else { + FieldRow(label: "Address", value: record.address) + } // Prefer the canonical `coreAddress.account` path; // fall back to the one-way `account` field for TXOs // whose address row hasn't been linked yet. @@ -787,10 +1548,6 @@ struct TxoStorageDetailView: View { label: "Account", value: (record.coreAddress?.account ?? record.account)?.accountTypeName ?? "—" ) - FieldRow( - label: "Address Row", - value: record.coreAddress?.address ?? "—" - ) FieldRow(label: "Wallet ID", value: record.walletId.isEmpty ? "—" : hexString(record.walletId)) FieldRow( label: "Created By", diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/WalletMemoryExplorerView.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/WalletMemoryExplorerView.swift index 27a37a6c210..d25f53f9c0b 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/WalletMemoryExplorerView.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/WalletMemoryExplorerView.swift @@ -130,8 +130,7 @@ struct WalletMemoryExplorerView: View { watchedCount: 0, lastScannedIndex: 0, primaryIdentityId: nil, - trackedAssetLocksCount: 0, - tokenBalancesCount: 0 + trackedAssetLocksCount: 0 ) VStack(alignment: .leading, spacing: 4) { Text(walletDisplayLabel(walletId, fromPersistent: nil)) @@ -178,10 +177,6 @@ struct WalletMemoryDetailView: View { label: "Tracked Asset Locks", value: "\(summary.trackedAssetLocksCount)" ) - KVRow( - label: "Token Balances", - value: "\(summary.tokenBalancesCount)" - ) if let primary = summary.primaryIdentityId { KVRow(label: "Primary Identity", value: shortBase58(primary)) Text(fullBase58(primary)) From 
3cbc599460747af297cbe656fa49937c210fab08 Mon Sep 17 00:00:00 2001 From: Quantum Explorer Date: Thu, 30 Apr 2026 04:47:01 +0800 Subject: [PATCH 2/2] fix(platform-wallet,swift-sdk): address CodeRabbit review findings MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - persistence.rs: return Err when round_success is false instead of merging the changeset into pending and calling on_store_fn after a rolled-back round. - identity_sync.rs: fix stop()/start() race where the old thread's cleanup unconditionally cleared background_cancel, orphaning a newly started loop. Uses a generation counter so cleanup only clears the slot when no newer start() has fired. - identity_sync.rs: rebuild the per-identity token cache from the live existing_row.tokens instead of the stale token_ids snapshot captured before network calls, so concurrent update_watched_tokens / unregister_identity changes aren't lost. - mod.rs: PlatformWalletManager::shutdown() now stops both periodic coordinators (PlatformAddressSyncManager, IdentitySyncManager) before cancelling the wallet-event adapter. - PersistenceHandler.swift: add inChangeset flag so per-kind helpers (persistWalletMetadata, persistAccount, persistAccountAddresses, persistPlatformPaymentAddresses) skip their own save() inside a begin/end changeset bracket, letting endChangeset commit or rollback the whole round atomically. - PersistentTxo.swift: fix comment (.nullify → .cascade) to match PersistentCoreAddress.txos relationship. - StorageRecordDetailViews.swift: fix schema section comment that claimed it dumps pretty-printed JSON when it only shows sizes. 
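The stop()/start() race fix above can be sketched std-only (illustrative names; tokio's CancellationToken is modeled as an AtomicBool and the manager's fields are simplified to the two that matter):

```rust
use std::sync::{
    atomic::{AtomicBool, AtomicU64, Ordering},
    Arc, Mutex,
};

// Simplified stand-in for the sync manager's cancel bookkeeping.
struct SyncLoop {
    background_cancel: Mutex<Option<Arc<AtomicBool>>>,
    background_generation: AtomicU64,
}

impl SyncLoop {
    fn new() -> Self {
        SyncLoop {
            background_cancel: Mutex::new(None),
            background_generation: AtomicU64::new(0),
        }
    }

    /// Install a fresh token and return the generation that owns it.
    fn start(&self) -> u64 {
        let mut guard = self.background_cancel.lock().unwrap();
        *guard = Some(Arc::new(AtomicBool::new(false)));
        // fetch_add returns the prior value; +1 is our generation.
        self.background_generation.fetch_add(1, Ordering::AcqRel) + 1
    }

    /// Cleanup run by the exiting loop thread: only clear the slot
    /// if no newer start() has installed a replacement token since.
    fn cleanup(&self, my_gen: u64) {
        let mut guard = self.background_cancel.lock().unwrap();
        if self.background_generation.load(Ordering::Acquire) == my_gen {
            *guard = None;
        }
    }
}

fn main() {
    let s = SyncLoop::new();
    let gen1 = s.start(); // first loop installs token #1
    let gen2 = s.start(); // restart replaces it before cleanup #1 runs
    s.cleanup(gen1);      // stale cleanup: generation moved on, no-op
    assert!(s.background_cancel.lock().unwrap().is_some());
    s.cleanup(gen2);      // current cleanup clears the slot
    assert!(s.background_cancel.lock().unwrap().is_none());
}
```

Without the generation check, the stale cleanup would clear the slot installed by the second start(), orphaning the new loop.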
Co-Authored-By: Claude Opus 4.7 (1M context) --- .../rs-platform-wallet-ffi/src/persistence.rs | 8 +++ .../src/manager/identity_sync.rs | 57 +++++++++---------- .../rs-platform-wallet/src/manager/mod.rs | 14 +++-- .../Persistence/Models/PersistentTxo.swift | 4 +- .../PlatformWalletPersistenceHandler.swift | 18 ++++-- .../Views/StorageRecordDetailViews.swift | 4 +- 6 files changed, 59 insertions(+), 46 deletions(-) diff --git a/packages/rs-platform-wallet-ffi/src/persistence.rs b/packages/rs-platform-wallet-ffi/src/persistence.rs index a1b5204ffe7..eff99d2ff3a 100644 --- a/packages/rs-platform-wallet-ffi/src/persistence.rs +++ b/packages/rs-platform-wallet-ffi/src/persistence.rs @@ -763,6 +763,14 @@ impl PlatformWalletPersistence for FFIPersister { } } + if !round_success { + return Err( + "one or more persistence callbacks failed; changeset was rolled back" + .to_string() + .into(), + ); + } + // Merge into pending changesets. let mut pending = self.pending.write(); pending diff --git a/packages/rs-platform-wallet/src/manager/identity_sync.rs b/packages/rs-platform-wallet/src/manager/identity_sync.rs index 566372ab2b2..29d3b8f92e2 100644 --- a/packages/rs-platform-wallet/src/manager/identity_sync.rs +++ b/packages/rs-platform-wallet/src/manager/identity_sync.rs @@ -160,6 +160,10 @@ where persister: Arc
<P>
, /// Cancel token for the background loop, if running. background_cancel: StdMutex>, + /// Monotonically increasing generation counter. Incremented each + /// time `start()` installs a new cancel token so the exiting + /// thread can tell whether its token is still current. + background_generation: AtomicU64, interval_secs: AtomicU64, is_syncing: AtomicBool, /// Unix seconds of the last completed pass across all identities. @@ -193,6 +197,7 @@ where sdk, persister, background_cancel: StdMutex::new(None), + background_generation: AtomicU64::new(0), interval_secs: AtomicU64::new(DEFAULT_SYNC_INTERVAL_SECS), is_syncing: AtomicBool::new(false), last_sync_unix: AtomicU64::new(0), @@ -379,6 +384,7 @@ where } let cancel = CancellationToken::new(); *guard = Some(cancel.clone()); + let my_gen = self.background_generation.fetch_add(1, Ordering::AcqRel) + 1; drop(guard); let handle = tokio::runtime::Handle::current(); @@ -401,8 +407,12 @@ where } } + // Only clear the slot if no newer start() has + // installed a replacement token since we launched. if let Ok(mut guard) = this.background_cancel.lock() { - *guard = None; + if this.background_generation.load(Ordering::Acquire) == my_gen { + *guard = None; + } } }); }) @@ -573,33 +583,21 @@ where // intersected with what Platform reports. let mut state = self.state.write().await; if let Some(existing_row) = state.get(&identity_id).cloned() { - // Map each currently-watched token to its new info: keep - // the old contract / nonce placeholders, swap in the - // fresh balance if we got one, drop the row entirely if - // Platform removed it. 
- let prior_by_id: BTreeMap = existing_row - .tokens - .iter() - .map(|info| (info.token_id, *info)) - .collect(); - - let mut new_tokens: Vec = Vec::with_capacity(token_ids.len()); - for token_id in token_ids { - match fresh_balances.get(token_id) { + // Rebuild from the *live* row (which may have been mutated + // by concurrent `update_watched_tokens` / `unregister_identity` + // while our network calls were in flight) rather than the + // stale `token_ids` snapshot. This way mid-sync registry + // changes are preserved: newly added tokens keep their + // initial state, and tokens removed during the pass stay + // removed. + let mut new_tokens: Vec = + Vec::with_capacity(existing_row.tokens.len()); + for prior in &existing_row.tokens { + match fresh_balances.get(&prior.token_id) { Some(Some(amount)) => { - let prior = - prior_by_id - .get(token_id) - .copied() - .unwrap_or(IdentityTokenSyncInfo { - token_id: *token_id, - contract_id: Identifier::default(), - balance: 0, - identity_contract_nonce: 0, - }); new_tokens.push(IdentityTokenSyncInfo { balance: *amount, - ..prior + ..*prior }); } Some(None) => { @@ -607,12 +605,9 @@ where // this identity — drop the row. } None => { - // Batch failed for this token — keep the - // prior row to avoid clobbering on transient - // errors. - if let Some(prior) = prior_by_id.get(token_id).copied() { - new_tokens.push(prior); - } + // Batch didn't cover this token (added mid- + // sync, or batch failed) — keep prior state. + new_tokens.push(*prior); } } } diff --git a/packages/rs-platform-wallet/src/manager/mod.rs b/packages/rs-platform-wallet/src/manager/mod.rs index 58e2f046661..3a929943889 100644 --- a/packages/rs-platform-wallet/src/manager/mod.rs +++ b/packages/rs-platform-wallet/src/manager/mod.rs @@ -121,13 +121,17 @@ impl PlatformWalletManager
<P>
{ } } - /// Stop the wallet-event adapter task and wait for it to exit. + /// Stop all background tasks and wait for them to exit. /// - /// Idempotent. After this returns, no further `WalletEvent`s will - /// be projected to the persister. Call before dropping the manager - /// when a clean shutdown is required (e.g. on app termination); a - /// dirty drop simply leaks the task until the runtime exits. + /// Stops the periodic coordinators (`PlatformAddressSyncManager`, + /// `IdentitySyncManager`) and the wallet-event adapter task. + /// Idempotent. Call before dropping the manager when a clean + /// shutdown is required (e.g. on app termination); a dirty drop + /// simply leaks the tasks until the runtime exits. pub async fn shutdown(&self) { + self.platform_address_sync_manager.stop(); + self.identity_sync_manager.stop(); + self.event_adapter_cancel.cancel(); if let Some(handle) = self.event_adapter_join.lock().await.take() { if let Err(e) = handle.await { diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentTxo.swift b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentTxo.swift index 0aaeb57014d..85c2ed20b54 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentTxo.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentTxo.swift @@ -92,8 +92,8 @@ public final class PersistentTxo { /// outgoing recipient). The relationship is the convenient /// pointer for navigating to derivation metadata, balance, and /// pool tag without a separate fetch. Inverse of - /// `PersistentCoreAddress.txos`; `.nullify` on that side so - /// pool rebuilds don't cascade-delete TXOs. + /// `PersistentCoreAddress.txos`; `.cascade` on that side so + /// account / wallet teardown drops TXOs cleanly. public var coreAddress: PersistentCoreAddress? 
public init( diff --git a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift index 96d984e682b..34312ba877e 100644 --- a/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift +++ b/packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift @@ -31,6 +31,12 @@ public class PlatformWalletPersistenceHandler { qos: .userInitiated ) + /// True while inside a begin/end changeset bracket. When set, + /// per-kind helpers skip their own `backgroundContext.save()` and + /// let `endChangeset` commit (or rollback) the whole round + /// atomically. + private var inChangeset = false + public init(modelContainer: ModelContainer) { self.modelContainer = modelContainer self.backgroundContext = ModelContext(modelContainer) @@ -611,7 +617,8 @@ public class PlatformWalletPersistenceHandler { /// instrumented timing, etc.) has an obvious seam. func beginChangeset(walletId: Data) { onQueue { - _ = walletId // reserved for future wallet-scope batching + _ = walletId + self.inChangeset = true } } @@ -628,6 +635,7 @@ public class PlatformWalletPersistenceHandler { func endChangeset(walletId: Data, success: Bool) { onQueue { _ = walletId + defer { self.inChangeset = false } if success { do { try backgroundContext.save() @@ -1664,7 +1672,7 @@ public class PlatformWalletPersistenceHandler { } } - try? backgroundContext.save() + if !self.inChangeset { try? backgroundContext.save() } } // onQueue } @@ -1732,7 +1740,7 @@ public class PlatformWalletPersistenceHandler { row.lastUpdated = Date() } - try? backgroundContext.save() + if !self.inChangeset { try? 
backgroundContext.save() } } /// Split a DIP-0018 bech32m platform address back into @@ -1823,7 +1831,7 @@ public class PlatformWalletPersistenceHandler { wallet.network = appNetwork(for: networkTag) wallet.birthHeight = birthHeight wallet.lastUpdated = Date() - try? backgroundContext.save() + if !self.inChangeset { try? backgroundContext.save() } } } @@ -1934,7 +1942,7 @@ public class PlatformWalletPersistenceHandler { account.friendIdentityId = friendIdentityId account.accountExtendedPubKeyBytes = xpubBytes account.lastUpdated = Date() - try? backgroundContext.save() + if !self.inChangeset { try? backgroundContext.save() } } // onQueue } diff --git a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift index f69218ef6be..5198f8a253e 100644 --- a/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift +++ b/packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageRecordDetailViews.swift @@ -908,9 +908,7 @@ struct DocumentTypeStorageDetailView: View { } Section("Schema") { // The schema and properties JSON blobs are stored as - // raw bytes; surface size first so an absent / tiny / - // huge schema is visible at a glance, then dump the - // pretty-printed payload below if it decodes. + // raw bytes; surface sizes and required fields. FieldRow(label: "Schema Size", value: "\(record.schemaJSON.count) bytes") FieldRow( label: "Properties Size",
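The cache-rebuild rule from the identity_sync.rs hunk can be sketched with toy types (u8 token ids and a bare balance standing in for the real Identifier / IdentityTokenSyncInfo): iterate the live row, not the pre-network snapshot.

```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy, Debug, PartialEq)]
struct TokenInfo {
    token_id: u8,
    balance: u64,
}

/// Rebuild the watch cache from the *live* row: tokens the batch
/// updated get the fresh balance, tokens Platform reports as gone
/// are dropped, and tokens the batch did not cover (added mid-sync,
/// or batch failure) keep their prior state.
fn rebuild(live: &[TokenInfo], fresh: &BTreeMap<u8, Option<u64>>) -> Vec<TokenInfo> {
    let mut out = Vec::with_capacity(live.len());
    for prior in live {
        match fresh.get(&prior.token_id) {
            Some(Some(amount)) => out.push(TokenInfo {
                balance: *amount,
                ..*prior
            }),
            Some(None) => {} // Platform reports no balance: drop the row
            None => out.push(*prior), // not covered by the batch: keep prior
        }
    }
    out
}

fn main() {
    let live = vec![
        TokenInfo { token_id: 1, balance: 10 },
        TokenInfo { token_id: 2, balance: 20 },
        TokenInfo { token_id: 3, balance: 30 }, // added mid-sync, not in batch
    ];
    let mut fresh = BTreeMap::new();
    fresh.insert(1u8, Some(100u64)); // updated
    fresh.insert(2u8, None);         // removed by Platform
    let out = rebuild(&live, &fresh);
    assert_eq!(
        out,
        vec![
            TokenInfo { token_id: 1, balance: 100 },
            TokenInfo { token_id: 3, balance: 30 },
        ]
    );
}
```

Keying the loop on the live row is what preserves concurrent update_watched_tokens / unregister_identity changes: a snapshot taken before the network calls would resurrect removed tokens and silently drop mid-sync additions.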