Conversation
Implements Phase 1 of WebDAV synchronization feature:
- Add dependencies: WorkManager, OkHttp, security-crypto
- Add network permissions (INTERNET, ACCESS_NETWORK_STATE)
- Create SyncSettings data class with sync configuration
- Implement CredentialManager for encrypted credential storage
- Implement WebDAVClient with full WebDAV operations
  - Basic authentication support
  - PROPFIND, PUT, GET, DELETE, MKCOL methods
  - Directory creation and file streaming support
Implements Phase 2 of WebDAV synchronization:
- FolderSerializer: Convert folder hierarchy to/from folders.json
- NotebookSerializer: Convert notebooks/pages/strokes/images to/from JSON
- Handles manifest.json for notebook metadata
- Handles per-page JSON with all strokes and images
- Converts absolute URIs to relative paths for WebDAV storage
- Supports ISO 8601 timestamps for conflict resolution

Phase 2 complete. Next: SyncEngine for orchestrating sync operations.
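A minimal sketch of what a second-precision ISO 8601 formatter for the serializers might look like (names like `formatTimestamp` are illustrative assumptions, not the actual NotebookSerializer API). Note that this pattern carries no milliseconds, which matters for timestamp comparison later:

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale
import java.util.TimeZone

// Second-precision ISO 8601 in UTC, e.g. "2025-01-15T09:30:00Z".
val iso8601Format = SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.US).apply {
    timeZone = TimeZone.getTimeZone("UTC")
}

fun formatTimestamp(date: Date): String = iso8601Format.format(date)

fun parseTimestamp(value: String): Date = iso8601Format.parse(value) ?: Date()
```

A round trip through this format truncates sub-second precision, so `Date(1500)` comes back as `Date(1000)`.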
Creates skeleton implementations for remaining sync components:

Core Sync Components:
- SyncEngine: Core orchestrator with stub methods for sync operations
- ConnectivityChecker: Network state monitoring (complete)
- SyncWorker: Background periodic sync via WorkManager
- SyncScheduler: Helper to enable/disable periodic sync

UI Integration:
- Add "Sync" tab to Settings UI
- Stub SyncSettings composable with basic toggle

All components compile and have proper structure. Ready to fill in implementation details incrementally. TODOs mark where logic needs to be added.
- Fix KvProxy import path (com.ethran.notable.data.db.KvProxy)
- Replace HTTP_METHOD_NOT_ALLOWED with constant 405
- Correct package imports in SyncEngine
Add full-featured sync settings interface with:
- Server URL, username, password input fields
- Test Connection button with success/failure feedback
- Enable/disable sync toggle
- Auto-sync toggle (enables/disables WorkManager)
- Sync on note close toggle
- Manual "Sync Now" button
- Last sync timestamp display
- Encrypted credential storage via CredentialManager
- Proper styling matching app's design patterns

All settings are functional and persist correctly. UI is ready for actual sync implementation.
showHint takes (text, scope) not (context, text)
Log URL and credentials being used, response codes, and errors to help diagnose connection issues
Use Dispatchers.IO for network calls (Test Connection, Sync Now). Switch back to Dispatchers.Main for UI updates using withContext. Fixes: NetworkOnMainThreadException when testing WebDAV connection
Core sync implementation:
- syncAllNotebooks(): Orchestrates full sync of folders + notebooks
- syncFolders(): Bidirectional folder hierarchy sync with merge
- syncNotebook(): Per-notebook sync with last-write-wins conflict resolution
- uploadNotebook/uploadPage(): Upload notebook data and files to WebDAV
- downloadNotebook/downloadPage(): Download notebook data and files from WebDAV
- Image and background file handling (upload/download)

Database enhancements:
- Add getAll() to FolderDao/FolderRepository
- Add getAll() to NotebookDao/BookRepository

Sync features:
- Timestamp-based conflict resolution (last-write-wins)
- Full page overwrite on conflict (no partial merge)
- Image file sync with local path resolution
- Custom background sync (skips native templates)
- Comprehensive error handling and logging
- Resilient to partial failures (continues if one notebook fails)

Quick Pages sync still TODO (marked in code).
- Remove context parameter from ensureBackgroundsFolder/ensureImagesFolder
- Fix image URI updating (create new Image objects instead of reassigning val)
- Use updatedImages when saving to database
- Handle nullable URI checks properly
Safety Features for Initial Sync Setup:
- forceUploadAll(): Delete server data, upload all local notebooks/folders
- forceDownloadAll(): Delete local data, download all from server

UI:
- "Replace Server with Local Data" button (orange warning)
- "Replace Local with Server Data" button (red warning)
- Confirmation dialogs with clear warnings
- Prevents accidental data loss on fresh device sync

Use cases:
- Setting up sync for first time
- Adding new device to existing sync
- Recovering from sync conflicts
- Resetting sync environment
Log notebook discovery, download attempts, and directory listings to diagnose sync issues
Features:
- SyncLogger: Maintains last 50 sync log entries in memory
- Live log display in Settings UI (last 20 entries)
- Color-coded: green (info), orange (warning), red (error)
- Auto-scrolls to bottom as new logs arrive
- Clear button to reset logs
- Monospace font for readability

Makes debugging sync issues much easier for end users without needing to check Logcat.
Fixes:
- forceUploadAll: Delete only notebook directories, not entire /Notable folder
- Add detailed SyncLogger calls throughout force operations
- Add logging to upload/download operations with notebook titles

Log viewer now shows:
- Exactly which notebooks are being uploaded/downloaded
- Success/failure for each notebook
- Number of pages per notebook
- Any errors encountered

This makes debugging sync issues much easier and prevents accidentally wiping the entire sync directory.
Add auto-sync trigger when switching pages in editor:
- Hook into EditorControlTower.switchPage()
- Pass context to EditorControlTower constructor
- Trigger SyncEngine.syncNotebook() when leaving a page
- Only syncs if enabled in settings
- Runs in background on IO dispatcher
- Logs to SyncLogger for visibility

Now sync-on-close setting is functional.
Show clearly in sync log:
- ↑ Uploading (local newer or doesn't exist on server)
- ↓ Downloading (remote newer)
- Timestamp comparison for each decision
- Which path is taken for each notebook

This will help diagnose why sync only goes up and never down.
Create AppRepository instance to properly access PageRepository in triggerSyncForPage method.
Previous logic: equal timestamps → upload
New logic: equal timestamps → skip (no changes needed)

Now properly handles three cases:
- Remote newer → download
- Local newer → upload
- Equal → skip

This prevents unnecessary re-uploads when nothing has changed.
Move sync trigger to EditorView's DisposableEffect.onDispose, which fires when navigating away from the editor.

Now sync-on-close actually works when you:
- Navigate back to library
- Switch to a different notebook
- Exit the app

Will show "Auto-syncing on editor close" in sync log.
Use new CoroutineScope instead of composition scope in onDispose. The composition scope gets cancelled during disposal, causing "rememberCoroutineScope left the composition" error. Now sync-on-close will actually complete.
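The fix described above can be sketched roughly like this (a hedged illustration, not the actual EditorView code; `triggerSyncForPage` is a hypothetical name for the sync entry point):

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

@Composable
fun EditorView(pageId: String, triggerSyncForPage: suspend (String) -> Unit) {
    DisposableEffect(Unit) {
        onDispose {
            // The scope from rememberCoroutineScope() is tied to the composition
            // and gets cancelled during disposal, so the sync would never run.
            // A fresh scope outlives the composable and lets the sync complete.
            CoroutineScope(Dispatchers.IO).launch {
                triggerSyncForPage(pageId)
            }
        }
    }
}
```

The trade-off is that a standalone scope is not cancelled automatically, so long-running work launched this way should be short or self-limiting.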
Log when credentials are loaded or missing to help diagnose AUTH_ERROR issues.
Show full Date.time millisecond values and difference to diagnose why timestamps that appear equal are being treated as different. This should reveal if there are sub-second differences causing unnecessary uploads.
Add null-safe access to remoteUpdatedAt.time
Problem: ISO 8601 serialization loses milliseconds, causing local timestamps to always be slightly newer (100-500ms).

Solution: Ignore differences < 1 second (1000ms)
- Difference < -1000ms: remote newer → download
- Difference > 1000ms: local newer → upload
- Within ±1 second: no significant change → skip

This prevents unnecessary re-uploads of unchanged notebooks while still detecting real changes.
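The tolerance rule described above can be captured in a small pure function. This is an illustrative sketch (the enum and function names are not from the actual SyncEngine code):

```kotlin
enum class SyncAction { DOWNLOAD, UPLOAD, SKIP }

fun decideSyncAction(
    localMillis: Long,
    remoteMillis: Long,
    toleranceMs: Long = 1000L
): SyncAction {
    val diff = localMillis - remoteMillis
    return when {
        diff < -toleranceMs -> SyncAction.DOWNLOAD // remote significantly newer
        diff > toleranceMs -> SyncAction.UPLOAD    // local significantly newer
        else -> SyncAction.SKIP                    // within ±1s: no real change
    }
}
```

Keeping the decision in one pure function also makes the three cases trivial to unit-test.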
After syncing local notebooks, now scans server for notebooks that don't exist locally and downloads them.

Flow:
1. Sync folders
2. Sync all local notebooks (upload/download/skip)
3. Discover new notebooks on server
4. Download any that don't exist locally

This enables proper bidirectional sync - devices can now discover and download notebooks created on other devices.
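The discovery step (3-4 above) reduces to a set difference between server and local notebook IDs. A minimal sketch, assuming the IDs have already been listed by some `PROPFIND`-backed helper (the function name here is hypothetical):

```kotlin
// Return server notebook IDs that have no local counterpart and should
// therefore be downloaded.
fun findNotebooksToDownload(serverIds: List<String>, localIds: List<String>): List<String> {
    val local = localIds.toSet() // O(1) membership checks
    return serverIds.filter { it !in local }
}
```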
please add a switch to explicitly allow syncing on mobile data. It shouldn't sync by default when not connected to WiFi.
Also, please provide instructions on how to set up syncing if a user has 2FA, i.e.:
ok, great! I will check it in the evening or tomorrow and let you know if everything is ok. Also, I started a major refactoring of the codebase, mostly to separate UI from other logic and so that the @Preview annotation will work. I will probably create a PR in the next few days; it shouldn't affect this too much. I finally discovered why it was so painful to work on UI in Notable.
this is a very good idea imo. will make maintenance much easier. |
@jdkruzr I looked at the docs, and have some questions, and remarks:
devices sync on notebook open/close, not continuously like e.g. Dropbox. since this is a single-user system it's extremely unlikely you run into this condition, and it's hard to address without transaction support that you would get from something like server extensions -- and that would defeat the purpose of designing this to be usable with any bog-standard WebDAV server. you could probably make this happen if you actively tried, but you would need to have two devices open at the same time and then close or delete a note at precisely the same time -- hard to even do on purpose, never mind accidentally!
a race condition here is still unlikely but distantly possible -- but merging data is idempotent enough that you mostly only run the risk of things appearing to be wrong for a moment before the next sync. the best fix for this problem though is the ETags feature. do you want to move that into this PR after all?

for folders a manifest file makes sense because WebDAV folders don't have metadata like titles or parent relationships. deletions.json on the other hand is needed specifically because PROPFIND can't tell you what "used to exist." if you delete a notebook locally, PROPFIND on the server still shows it — you need to communicate "I intentionally deleted this" to other devices -- deletions.json is an implementation of tombstoning ( https://en.wikipedia.org/wiki/Tombstone_(data_store) ). timestamps alone can't distinguish "never synced" from "deleted."

re: the JSONs growing fast -- true, I think we should implement a truncation feature for deletions. folders.json shouldn't grow that fast. but it's probably safe to truncate the tombstone file for all records older than, say, 60 days. by that point all your devices should be caught up.
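The proposed 60-day truncation is a simple age filter over the tombstone records. A hedged sketch — `Tombstone` and its fields are hypothetical, not the real deletions.json schema:

```kotlin
data class Tombstone(val notebookId: String, val deletedAtMillis: Long)

// Keep only tombstones younger than maxAgeMillis (default: 60 days).
fun truncateTombstones(
    tombstones: List<Tombstone>,
    nowMillis: Long,
    maxAgeMillis: Long = 60L * 24 * 60 * 60 * 1000
): List<Tombstone> =
    tombstones.filter { nowMillis - it.deletedAtMillis <= maxAgeMillis }
```

This runs once per sync before uploading the merged tombstone file; the open question in the thread is what to do with devices that last synced before the truncation window.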
updates parentFolderId on the notebook, which bumps updatedAt. next sync uploads the updated manifest. works correctly with current design.
updates pageIds order in the manifest. same mechanism — updatedAt bumps, manifest re-uploads. also works.
we do upload individual page files (pages/{pageId}.json), but conflict resolution is at the notebook level (compare manifest updatedAt). so if Device A edits page 3 and Device B edits page 7, whichever device syncs last wins the entire notebook. this is a trade-off: true per-page sync would require per-page timestamps in the manifest and a page-level diff/merge — significantly more complex. given the target use case (personal device, rarely true multi-device concurrent editing), notebook-level LWW is a reasonable V1. we may be able to take advantage of ETags in the future to help this, though.
this is another tradeoff: folders are lightweight metadata (no binary content), so a single file avoids N extra PROPFIND/GET round-trips per folder. if we did it the way you're describing instead, we would incur a cost of potentially many, many HTTP calls to resolve, and WebDAV is notoriously slow at this at the server level.
this is actually in conformance with the spec: we only ignore 405 specifically on MKCOL — which is the correct behavior per RFC 4918. MKCOL returns 405 when the collection already exists. This is idempotent and safe.
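The MKCOL-specific status handling boils down to a one-line predicate. An illustrative sketch (the function name is not from the actual WebDAVClient):

```kotlin
// For MKCOL only: any 2xx means the collection was created, and 405 means
// it already exists (RFC 4918) — both are acceptable outcomes.
fun isMkcolSuccess(statusCode: Int): Boolean =
    statusCode in 200..299 || statusCode == 405
```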
made a bunch of edits to this effect and moved stuff into appendices.
The main question I think is:
If it is possible, it will happen eventually, and possibly in the worst situation. Then debugging it is a pain, as it is not easily reproducible. Is there any option to fix it? You are right that we have to know somehow that something was deleted. Other than your solution, I see only one option: ETags. If an entity has an ETag from the server then it was synced, and should be deleted.
Not for now, the timestamps are enough to get the implementation working. And I don't get it: how would ETags fix it?
I don't like this assumption that after 60 days all the devices will be up to date. If we truncate this data after 60 days, then a device whose last sync was more than 60 days ago should be forced to check if everything is up to date.
In my proposal I would see it like this: if we modify page2 (add a stroke) in `/folder1/folder2/notebook1/page2` then we would do 4 PROPFINDs, and then one GET. In your case:
So there is not that much of a difference. Yours gets more time-consuming as there are more notebooks; mine if there are a lot of nested folders.
http://www.webdav.org/specs/rfc2518.html The spec supports conflict resolution. I would say that every write to the server should be behind If-Match if we already know what the ETag should be, i.e.: we just fetched the file, or, later on, we have the ETag saved in the db.
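A minimal sketch of what an If-Match-guarded PUT looks like at the HTTP level (using plain `HttpURLConnection` for illustration; the actual client uses OkHttp, and the URL/ETag values here are dummies). A server that supports conditional requests answers 412 Precondition Failed when the stored ETag no longer matches, which is exactly the lost-update protection being discussed:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Build a PUT that only succeeds if the server's current ETag still matches
// the one we saved when we last fetched the resource.
fun buildConditionalPut(url: String, etag: String?): HttpURLConnection {
    val conn = URL(url).openConnection() as HttpURLConnection
    conn.requestMethod = "PUT"
    // Only guard when we actually have a stored ETag for this resource;
    // a first-time upload has nothing to match against.
    if (etag != null) conn.setRequestProperty("If-Match", etag)
    return conn
}
```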
I would say that the architecture is ok; I'm not entirely convinced it is the best we can do. I would really want to see how others are doing it. Also, when we switch to using ETags, should we deprecate `deletions.json`? If a local notebook has an ETag (that was fetched from the server), but it does not exist on the server, then it was deleted. I will try to slowly go through the code and review the details, starting from storage of a singular notebook, as from an architecture point of view it most likely won't change -- I don't see any other option for implementing it.
Ethran
left a comment
I read some files, added some comments.
SyncEngine and WebDAVClient are quite big; I will need some time to go through them.
A few remarks:
- You added the SyncLogger class, but you only use it sometimes; it's inconsistent. Should it be this way?
- Why do you need to write your own serialization? I think most of it can just reuse what the db is doing right now.
.idea/codeStyles/codeStyleConfig.xml
remove all files from .idea folder from this PR
    if (pageId == null) return

    try {
        val appRepository = com.ethran.notable.data.AppRepository(context)
If this function is called frequently, we shouldn't initialize appRepository on every run.
      private var history: History,
    - private val state: EditorState
    + private val state: EditorState,
    + private val context: Context
To be considered:
do we need context or appRepository?
    page.disposeOldPage()

    // Trigger sync on note close if enabled
    val settings = GlobalAppSettings.current
why not syncSettings? Do you need settings other than those related to sync?
      stream.close()
      } catch (e: IOException) {
    - e.printStackTrace()
    + Log.e(TAG, "Failed to save shared image: ${e.message}", e)
    id = folder.id,
    title = folder.title,
    parentFolderId = folder.parentFolderId,
    createdAt = iso8601Format.format(folder.createdAt),
Why not just use default time format of db?
ok, I see why: the db could change, and we can't migrate the server easily
    */
    fun serializeManifest(notebook: Notebook): String {
        val manifestDto = NotebookManifestDto(
            version = 1,
Is this the only change compared to Notebook from the db (other than the time format)? Maybe we can inherit from Notebook, add version, and use default serialization?
    */
    fun serializePage(page: Page, strokes: List<Stroke>, images: List<Image>): String {
        // Serialize strokes with embedded base64-encoded SB1 binary points
        val strokeDtos = strokes.map { stroke ->
Again, we already have some serialization, and here the only difference is the encoding of time.
    */
    object SyncScheduler {

        private const val DEFAULT_SYNC_INTERVAL_MINUTES = 5L
    import java.util.Locale

    @Composable
    fun SyncSettings(kv: KvProxy, settings: AppSettings, context: Context) {
The UI should be decoupled from logic. I think that the best practice is to use ViewModel, but I'm not sure yet. I will try to open a PR that will clean up UI across the app today, but for now my code is a mess; we will see how it goes.
Either way, it should be done so one can use the @Preview annotation.
Here you can see where I'm going with this; mostly I try to follow best practices for composable functions and decouple UI logic from everything else.
    return Result.retry()
    }

    // Check WiFi-only setting (WorkManager constraint already handles this, but be explicit)
There is a TODO left in the code by the AI.
    href.trimEnd('/').substringAfterLast('/')
    }
    .filter { filename ->
        // Only include valid UUIDs
It seems like a comment left by the AI.
    message = "Are you sure you want to delete \"${book!!.title}\"?",
    onConfirm = {
        bookRepository.delete(bookId)
        val deletedNotebookId = bookId
      return packageInfo.versionName
      } catch (e: PackageManager.NameNotFoundException) {
    - e.printStackTrace()
    + Log.e(TAG, "Package not found: ${e.message}", e)
    this.lifecycleScope.launch(Dispatchers.IO) {
        reencodeStrokePointsToSB1(this@MainActivity)
    }
Move it to a separate function; keep onCreate short for easier navigation.
    **Important**: Notable will automatically append `/notable` to your server URL to keep your data organized. For example:
    - You enter: `https://nextcloud.example.com/remote.php/dav/files/username/`
    - Notable creates: `https://nextcloud.example.com/remote.php/dav/files/username/notable/`
When checking the configuration, the app should notify the user if the notable folder already exists -- to prevent situations where the user used syncing a long time ago and forgot about it. Simple text like "found notable data, last updated X days ago" would be enough. You can add more information if you want.
Ethran
left a comment
Another part. I still need to think if the code structure is ok.
    * @param wifiOnly If true, only run on unmetered (WiFi) connections
    */
    fun enablePeriodicSync(context: Context, intervalMinutes: Long = DEFAULT_SYNC_INTERVAL_MINUTES, wifiOnly: Boolean = false) {
        val networkType = if (wifiOnly) NetworkType.UNMETERED else NetworkType.CONNECTED
It might be confusing, as UNMETERED is a different property than WiFi. On some devices WiFi networks can be marked as metered connections; I'm not sure if that's the case on Android.
    response.close()
    } catch (e: Exception) {
        // Ignore response close errors
    }
duplicated code, and please at least log that it failed.
    .build()

    client.newCall(request).execute().use { response ->
        io.shipbook.shipbooksdk.Log.i("WebDAVClient", "Response code: ${response.code}")
I personally prefer to avoid writing the full package path, and just import it. The code looks nicer that way. Please add imports and change io.shipbook.shipbooksdk.Log to Log.
    return try {
        WebDAVClient(serverUrl, username, password).getServerTime()
    } catch (e: Exception) {
        null
    notebooksDeleted = deletedCount

    // 6. Sync Quick Pages (pages with notebookId = null)
    // TODO: Implement Quick Pages sync
    private suspend fun applyRemoteDeletions(webdavClient: WebDAVClient): DeletionsData {
        SLog.i(TAG, "Applying remote deletions...")

        val remotePath = "/$WEBDAV_ROOT_DIR/deletions.json"
I would put deletions.json into a separate variable, or write an object that would keep all the names on the server and return paths.
If it is used in only one place then it can stay that way, but otherwise it would be better to extract this string.
    */
    private suspend fun downloadPage(pageId: String, notebookId: String, webdavClient: WebDAVClient) {
        // Download page JSON (contains embedded base64-encoded SB1 binary stroke data)
        val pageJson = webdavClient.getFile("/$WEBDAV_ROOT_DIR/notebooks/$notebookId/pages/$pageId.json").decodeToString()
the path generation repeats in multiple places. I would put it into a centralized function, so that it's easy to change. It's a loose suggestion; see what makes sense.
I like to have functions for these things, as then I can't make spelling mistakes and changing it is easier. Sometimes it doesn't make sense, but here I would consider adding an object SyncFileStructure with functions getNotebookFolderPath(notebookId), getNotebookManifestPath(...), getPageDataPath(notebookId, pageId), etc.
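A sketch of what such a path object could look like (function names follow the suggestion above; the `notable` root directory value is assumed from the rest of the PR):

```kotlin
// Central place for all remote path construction, so a path can never be
// misspelled at a call site and the layout is changed in exactly one file.
object SyncFileStructure {
    private const val ROOT = "notable"

    fun notebookFolderPath(notebookId: String) = "/$ROOT/notebooks/$notebookId"

    fun notebookManifestPath(notebookId: String) =
        "${notebookFolderPath(notebookId)}/manifest.json"

    fun pageDataPath(notebookId: String, pageId: String) =
        "${notebookFolderPath(notebookId)}/pages/$pageId.json"

    fun deletionsPath() = "/$ROOT/deletions.json"
}
```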
    /**
     * Detect MIME type from file extension.
     */
    private fun detectMimeType(file: File): String {
Don't we have such a function somewhere in the code already? If not, it should be added in some utils file, as we have to detect types in other places too.
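If no shared helper exists yet, one option is to lean on platform facilities instead of a hand-rolled extension map. On Android the usual choice is `android.webkit.MimeTypeMap`; the sketch below uses the plain-JVM `URLConnection.guessContentTypeFromName` so it stays self-contained (treat it as an illustration, not the project's actual utility):

```kotlin
import java.io.File
import java.net.URLConnection

// Shared MIME detection with a safe fallback for unknown extensions.
fun detectMimeType(file: File): String =
    URLConnection.guessContentTypeFromName(file.name) ?: "application/octet-stream"
```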
I added some custom instructions to Copilot for reviewing code; let's see how well it works now.
Pull request overview
Copilot reviewed 28 out of 34 changed files in this pull request and generated 9 comments.
Files not reviewed (4)
- .idea/codeStyles/Project.xml: Language not supported
- .idea/codeStyles/codeStyleConfig.xml: Language not supported
- .idea/deploymentTargetSelector.xml: Language not supported
- .idea/markdown.xml: Language not supported
    /**
     * Parse ISO 8601 date string to Date object.
     */
    private fun parseIso8601(dateString: String): Date {
        return iso8601Format.parse(dateString) ?: Date()
    }
parseIso8601() can throw if the server sends a malformed timestamp (SimpleDateFormat.parse throws ParseException). Since this data is remote/untrusted, this can abort the entire folder sync. NotebookSerializer.parseIso8601() already guards with try/catch; consider matching that pattern here (return a fallback Date or surface a controlled sync error) to keep sync robust.
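A guarded variant along the lines of what Copilot suggests could look like this (a sketch only; the sentinel choice and names are assumptions, not the project's code):

```kotlin
import java.text.ParseException
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale
import java.util.TimeZone

val iso8601 = SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.US).apply {
    timeZone = TimeZone.getTimeZone("UTC")
}

// Remote timestamps are untrusted input: a malformed string must not abort
// the whole folder sync with an uncaught ParseException.
fun parseIso8601Safe(dateString: String): Date =
    try {
        iso8601.parse(dateString) ?: Date(0)
    } catch (e: ParseException) {
        Date(0) // epoch sentinel: treated as "very old", so local data wins
    }
```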
    `SyncScheduler` enqueues a `PeriodicWorkRequest` with:
    - Default interval: 5 minutes (configurable).
    - Network constraint: `NetworkType.CONNECTED` (won't run without network).
    - Policy: `ExistingPeriodicWorkPolicy.KEEP` (doesn't restart if already scheduled).
The WorkManager details here don’t match the current implementation: SyncScheduler uses ExistingPeriodicWorkPolicy.UPDATE (not KEEP), and SyncSettings defaults to a 15-minute interval (WorkManager minimum), while this text says 5 minutes. Please align these bullets with SyncScheduler.kt and SyncSettings defaults so the technical doc reflects real behavior.
.idea/deploymentTargetSelector.xml

    <DropdownSelection timestamp="2025-12-19T23:05:28.223853994Z">
        <Target type="DEFAULT_BOOT">
            <handle>
                <DeviceId pluginId="PhysicalDevice" identifier="serial=7b735d3b" />
This file embeds a physical device identifier (serial=...) in the repo. Deployment target selections are user/machine-specific and can leak identifiers; please revert/remove this serialized device ID (and consider excluding deploymentTargetSelector.xml from version control if possible).
Suggested change:

    - <DeviceId pluginId="PhysicalDevice" identifier="serial=7b735d3b" />
    + <DeviceId pluginId="PhysicalDevice" />
    // Get sync settings and credentials
    val settings = kvProxy.get(APP_SETTINGS_KEY, AppSettings.serializer())
        ?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)

    if (!settings.syncSettings.syncEnabled) {
        return@withContext SyncResult.Success
    }
uploadDeletion() doesn’t enforce the WiFi-only setting (syncSettings.wifiOnly). As written, a notebook delete can trigger network calls over mobile data even when the user enabled “Sync on WiFi only”. Consider applying the same WiFi-only gating logic used in syncAllNotebooks()/syncNotebook() (or explicitly documenting that deletions/force ops bypass WiFi-only).
    val credentialManager = CredentialManager(applicationContext)
    if (!credentialManager.hasCredentials()) {
        Log.w(TAG, "No credentials stored, skipping sync")
        return Result.failure()
Returning Result.failure() when credentials are missing will put the periodic Work into a FAILED state; in WorkManager this can prevent future periodic runs even after the user later saves credentials. Since this is a non-transient “skip” condition, prefer Result.success() (or cancel/disable the periodic work) so periodic sync can resume automatically when credentials are configured.
Suggested change:

    - return Result.failure()
    + return Result.success()
    ## Privacy & Security

    - **Credentials**: Stored securely in Android's CredentialManager (encrypted storage)
    - **Data in transit**: Uses HTTPS for secure communication (recommended)
    - **Data at rest**: Depends on your WebDAV provider's security
This guide says credentials are stored in Android’s CredentialManager, but the implementation stores them in EncryptedSharedPreferences via androidx.security.crypto (see CredentialManager.kt). Please update the documentation to reflect the actual storage mechanism to avoid misleading users evaluating security properties.
    val (settings, webdavClient) = initializeSyncClient()
        ?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
If initializeSyncClient() returns null, syncAllNotebooks() returns early with CONFIG_ERROR after already setting SyncState.Syncing. That code path never updates syncState back to Idle/Error, so the UI can get stuck showing “syncing” even though the mutex is unlocked. Before returning early, set an appropriate SyncState (e.g., Error(CONFIG_ERROR) or Idle) to keep state machine consistent.
Suggested change:

    - val (settings, webdavClient) = initializeSyncClient()
    -     ?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
    + val initResult = initializeSyncClient()
    + if (initResult == null) {
    +     updateState(
    +         SyncState.Error(
    +             error = SyncError.CONFIG_ERROR,
    +             step = SyncStep.INITIALIZING,
    +             canRetry = false
    +         )
    +     )
    +     return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
    + }
    + val (settings, webdavClient) = initResult
    suspend fun syncNotebook(notebookId: String): SyncResult = withContext(Dispatchers.IO) {
        return@withContext try {
            SLog.i(TAG, "Syncing notebook: $notebookId")

            // Get sync settings and credentials
syncNotebook() does not acquire the global syncMutex that syncAllNotebooks() uses. This allows sync-on-close / page-switch syncs to run concurrently with a full sync (or other single-notebook syncs), which can lead to overlapping WebDAV operations and concurrent local DB writes. Consider using the same mutex (tryLock + SYNC_IN_PROGRESS) for syncNotebook() (and similarly for other entry points like uploadDeletion/force ops) to guarantee single-device sync exclusivity as intended by the design docs.
    /**
     * Convert absolute file URI to relative path for WebDAV storage.
     * Example: /storage/emulated/0/Documents/notabledb/images/abc123.jpg -> images/abc123.jpg
     */
    private fun convertToRelativeUri(absoluteUri: String?): String {
        if (absoluteUri == null) return ""

        // Extract just the filename and parent directory
        val file = File(absoluteUri)
        val parentDir = file.parentFile?.name ?: ""
        val filename = file.name
convertToRelativeUri() returns an empty string when the Image uri is null, and ImageDto.uri is non-nullable. This means downloaded images can end up stored with uri="" (non-null), and uploadPage() checks only image.uri != null, which would treat an empty string as a real path and attempt File("") operations. Consider making ImageDto.uri nullable (String?) and preserving nulls (or consistently treating blank as null on both serialize/deserialize + upload).
@jdkruzr please answer the questions about architecture ( #189 (comment) ). The code requires a lot of cleaning, and the settings UI will have to be changed a lot, but that's not a priority right now -- it can wait until after merging #214 (I hope to complete it this weekend). Mostly the issue with the current approach is that it mixes the UI and other logic. Another change, depending on #214: I think that moving forward it would be better to use Hilt. For now, I would ask you to check the review comments and focus on improving the code quality of the purely sync-related stuff. I think that in the next two weeks it can be merged. And please, fix the race condition with

After the currently requested changes are made, I will probably do some more cleaning etc., so you can expect some commits from me.
Force-pushed d7ee456 to 3363131 (Compare)
ok, I asked Claude to collate and describe all the changes made in response to this feedback:

Change log (what was done and why, by comment)

Copilot 2889632453:
The main branch has to be merged into this, that will require:
If you will be using AI to do this, please add an instruction to follow best programming practices for Kotlin Jetpack Compose, and in general for Kotlin development.
Are you sure that now it is consistent throughout the code?
PLEASE check the AI response before sending the PR.
Also, when this version gets merged into the main branch, we will consider it version 1.0, so it shouldn't contain any already-deprecated code, or migrations from older development versions.
I'm not sure what the AI did from the comments, but I suspect that it tried to do a migration instead of just abandoning the old approach.
And I almost forgot: we should still use If-Match for updating the files.
#189 (comment)
    └── PUT /notable/folders.json (merged result)

    3. APPLY REMOTE DELETIONS
    ├── PROPFIND /notable/tombstones/ (Depth 1) → list of tombstone files with lastModified
Why not just name the folder deletions?


Outstanding Issues: