Add CloudKit-backed Reminders and Notes services with docs and examples #199
MrJarnould wants to merge 3 commits into timlaing:main from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough
Adds a full Notes stack (CloudKit client, protobufs, decoder, renderer/exporter, datasource), replaces the legacy Reminders implementation with a CloudKit-backed Reminders service (client, proto, mappers, reads/writes, service), wires cloudkit_validation_extra into PyiCloudService, and adds examples/tests and packaging updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as rgba(66,133,244,0.5)
    participant Service as rgba(0,153,68,0.5)
    participant CKClient as rgba(255,153,51,0.5)
    participant CloudKitAPI as rgba(153,51,255,0.5)
    Client->>Service: call export_note / get / create / iter_changes
    Service->>CKClient: build CKQuery / Modify / ZoneChanges request
    CKClient->>CloudKitAPI: POST /ckdatabasews (query/modify/changes)
    CloudKitAPI-->>CKClient: JSON response (records / tombstones / tokens)
    CKClient-->>Service: validated Pydantic models (CKRecord / responses)
    Service->>Service: map records → domain models / decode CRDT / parse protos
    Service-->>Client: rendered HTML / Reminder objects / change events
```

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 1
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (23)
pyicloud/services/reminders/_reads.py-177-190 (1)
177-190: ⚠️ Potential issue | 🟠 Major
Normalise reminder IDs before lookup to avoid false “not found” results.
`get()` currently sends `reminder_id` as-is, while other lookup paths use `_as_record_name(...)`. If callers pass an unprefixed ID, existing reminders can be missed.
Suggested fix:
```diff
 def get(self, reminder_id: str) -> Reminder:
     """Fetch a single reminder by ID."""
+    record_name = _as_record_name(reminder_id, "Reminder")
     resp = self._get_raw().lookup(
-        record_names=[reminder_id],
+        record_names=[record_name],
         zone_id=_REMINDERS_ZONE_REQ,
     )
     _assert_read_success(resp.records, "Lookup reminder")
     target = None
     for rec in resp.records:
-        if isinstance(rec, CKRecord) and rec.recordName == reminder_id:
+        if isinstance(rec, CKRecord) and rec.recordName == record_name:
             target = rec
             break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/_reads.py` around lines 177 - 190, The lookup may miss reminders because reminder IDs are not normalized before calling lookup; update the code in the get() path (where resp = self._get_raw().lookup(...)) to normalize reminder_id with the existing _as_record_name(...) helper before using it in record_names and when comparing rec.recordName (e.g., set reminder_id = self._as_record_name(reminder_id) or equivalent), so both the lookup and the recordName comparison use the canonical record name.
pyicloud/services/reminders/_reads.py-409-428 (1)
409-428: ⚠️ Potential issue | 🟠 Major
Use `_as_record_name` for trigger IDs to avoid double-prefix mismatches.
`alarms_for()` hardcodes `f"AlarmTrigger/{alarm.trigger_id}"` in both lookup and map access. If `trigger_id` is already prefixed, this can produce malformed IDs and silently drop triggers.
Suggested fix:
```diff
-    trigger_ids = [
-        f"AlarmTrigger/{alarm.trigger_id}" for alarm in alarms if alarm.trigger_id
-    ]
+    trigger_ids = [
+        _as_record_name(alarm.trigger_id, "AlarmTrigger")
+        for alarm in alarms
+        if alarm.trigger_id
+    ]
@@
     return [
         AlarmWithTrigger(
             alarm=alarm,
-            trigger=trigger_map.get(f"AlarmTrigger/{alarm.trigger_id}"),
+            trigger=(
+                trigger_map.get(_as_record_name(alarm.trigger_id, "AlarmTrigger"))
+                if alarm.trigger_id
+                else None
+            ),
         )
         for alarm in alarms
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/_reads.py` around lines 409 - 428, The code builds and looks up trigger IDs by hardcoding f"AlarmTrigger/{alarm.trigger_id}", which breaks when trigger_id already contains the prefix; fix by normalizing all trigger record names with the helper (use self._as_record_name): generate trigger_ids using self._as_record_name(alarm.trigger_id, "AlarmTrigger") in the list comprehension, store keys in trigger_map using the same normalized form for the mapped trigger (e.g. normalize trigger.id before assigning), and use self._as_record_name(alarm.trigger_id, "AlarmTrigger") when calling trigger_map.get to ensure consistent matching.
examples/notes_cli.py-3-4 (1)
3-4: ⚠️ Potential issue | 🟠 Major
Do not take the Apple ID password on argv.
Line 4 and the `--password` flag encourage passing the secret on the command line, which leaks into shell history and process listings. The example already has `get_password()`, so please keep the password on stdin/keyring instead, or add a `--password-stdin` flow if automation is needed.
🔐 Safer direction:
```diff
-uv run python examples/notes_cli.py --username you@example.com [--password ...]
+uv run python examples/notes_cli.py --username you@example.com
 ...
-    p.add_argument(
-        "--password",
-        dest="password",
-        default="",
-        help="Apple ID password (optional; keyring if omitted)",
-    )
 ...
-    pw = args.password or get_password(args.username)
+    pw = get_password(args.username)
```

Also applies to: 42-47, 219-225
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/notes_cli.py` around lines 3 - 4, The example currently encourages passing the Apple ID password on the command line via the --password flag and example usage; remove the --password argparse option and any direct use of args.password, update the top-of-file usage string to omit --password, and instead call the existing get_password() where the password is needed (or implement a --password-stdin flag that reads a single password from stdin when automation is required); also update other occurrences that reference args.password (including the blocks around the spots noted ~lines 42-47 and 219-225) to use get_password() or the stdin flow, and ensure help text and error messages reflect the new behavior.
pyicloud/services/reminders/protobuf/versioned_document.proto-8-8 (1)
8-8: ⚠️ Potential issue | 🟠 Major
Match the proto package to the file path.
Buf is already flagging Line 8 with `PACKAGE_DIRECTORY_MATCH`: `package versioned_document` does not match `pyicloud/services/reminders/protobuf/`. Unless this file is explicitly exempted, protobuf lint will fail. Please either move the file under a matching `versioned_document/` directory or rename the package to follow the existing path.
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/protobuf/versioned_document.proto` at line 8, Buf lint is failing because the proto package declaration "package versioned_document" does not match the file path; fix by either moving this .proto into a directory named versioned_document (so the package and directory align) or change the package declaration to match the existing path (e.g., update the package line to mirror pyicloud/services/reminders/protobuf) and then update any imports/references that use versioned_document to the new package name to satisfy PACKAGE_DIRECTORY_MATCH.
pyicloud/services/notes/rendering/attachments.py-96-124 (1)
96-124: ⚠️ Potential issue | 🟠 Major
Sanitise attachment URLs before writing HTML attributes.
These renderers copy CloudKit URLs straight into `href`, `src`, and `data`. A `javascript:` value, a protocol-relative URL, or a mixed-case scheme would still be emitted, and Line 181 only treats lowercase `http(s)` as remote. Please normalise once and allow-list only the schemes the exporter intentionally supports.
🔒 One way to centralise it:
```diff
+from urllib.parse import urlsplit
+
+def _safe_url(url: Optional[str], *, allowed_schemes: set[str]) -> Optional[str]:
+    if not url:
+        return None
+    parts = urlsplit(url)
+    if not parts.scheme:
+        return None if url.startswith("//") else url
+    if parts.scheme.lower() in allowed_schemes:
+        return url
+    return None
```

Then run every `href`/`src`/`data` assignment through `_safe_url(...)` with the scheme set appropriate for that renderer.
Also applies to: 127-143, 146-170, 175-226, 229-250
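As a standalone illustration of the allow-list idea (this is a sketch, not the module's code), the helper behaves like this on representative inputs:

```python
# Allow-list URL schemes: scheme-less relative paths pass, protocol-relative
# "//host" references and non-allow-listed schemes are dropped.
from typing import Optional
from urllib.parse import urlsplit


def safe_url(url: Optional[str], *, allowed_schemes: frozenset) -> Optional[str]:
    if not url:
        return None
    parts = urlsplit(url)
    if not parts.scheme:
        # No scheme: plain relative paths are fine, "//host" is not.
        return None if url.startswith("//") else url
    # urlsplit lower-cases the scheme, so mixed-case tricks are covered.
    return url if parts.scheme in allowed_schemes else None
```

Each renderer would call this with its own scheme set, and omit the attribute entirely when the result is `None`.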
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/attachments.py` around lines 96 - 124, The code currently writes raw attachment URLs into attributes (see _UrlRenderer.render, using ctx.primary_url and ctx.base_attrs), which allows dangerous schemes like "javascript:" or protocol-relative URLs; add a single helper function (e.g. _safe_url(url: str, allowed_schemes: Iterable[str]) -> Optional[str]) that normalises (strip surrounding whitespace, lower-case scheme, remove control chars), rejects or returns None for disallowed schemes (only allowlisted schemes such as "http", "https", "mailto", "data" where appropriate), and returns a sanitized URL; then call this helper wherever URLs are emitted (in _UrlRenderer.render for href, and in the other renderers referenced in the comment that set href/src/data) and only include the attribute when _safe_url(...) returns a non-empty value, falling back to omitting the attribute or rendering a safe placeholder.
examples/notes_cli.py-309-333 (1)
309-333: ⚠️ Potential issue | 🟠 Major
Gate raw note dumps behind an explicit debug flag.
`console.print(item)` and especially `console.print(proto_note)` run on every export. That makes the default path spill note contents into terminal scrollback and CI logs instead of only into the requested HTML files.
🛡️ Possible fix:
```diff
-        console.rule(f"idx: {idx}")
-        console.print(item, end="\n\n")
+        if args.verbose or args.notes_debug:
+            console.rule(f"idx: {idx}")
+            console.print(item, end="\n\n")
 ...
-        console.print("proto_note:")
-        console.print(proto_note, end="\n\n")
+        if args.notes_debug:
+            console.print("proto_note:")
+            console.print(proto_note, end="\n\n")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/notes_cli.py` around lines 309 - 333, The debug dumps of each note (console.print(item), console.rule(...), and console.print(proto_note)) should be gated behind an explicit debug/verbose flag; update the loop processing candidates (inside the for idx, item in enumerate(candidates) block) to check a runtime flag (e.g., args.debug or a module-level VERBOSE/DEBUG variable) before calling console.rule, console.print(item), and console.print(proto_note), leaving phase(...) and lookup logic unchanged so normal exports do not emit note contents to stdout/CI unless the debug flag is enabled.
pyicloud/services/notes/rendering/attachments.py-64-93 (1)
64-93: ⚠️ Potential issue | 🟠 Major
Keep a live fallback link when an asset URL exists.
`_DefaultRenderer` and `_TableRenderer` always emit an `<a>` without `href`. For unknown UTIs, or a table whose mergeable payload cannot be rendered, that turns a perfectly good `primary_url` into a dead element in the export.
🐛 Possible fix:
```diff
-        extra = {"class": "attachment link"}
+        extra = {"class": "attachment link"}
+        if ctx.primary_url:
+            extra["href"] = ctx.primary_url
```

Apply the same change in both fallback renderers so `href` is omitted only when there is genuinely no asset URL.
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/attachments.py` around lines 64 - 93, Both _DefaultRenderer.render and _TableRenderer.render create fallback <a> elements without an href, which turns a valid primary_url into a dead link; update both render methods to set extra["href"] = ctx.primary_url (or equivalent) only when ctx.primary_url is present before calling ctx.base_attrs(extra) so href is omitted only if there is genuinely no asset URL. Ensure you change this in _DefaultRenderer.render and _TableRenderer.render and preserve the existing rel/referrerpolicy/target handling.
pyicloud/services/notes/protobuf/notes.proto-1-4 (1)
1-4: ⚠️ Potential issue | 🟠 Major
Add a protobuf `package` declaration here.
The file lacks a `package` statement, which violates Buf's `PACKAGE_DEFINED` linting rule. Following the repository's convention (see `reminders.proto` and `versioned_document.proto`), add `package notes;` after the `syntax` line.
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/protobuf/notes.proto` around lines 1 - 4, Add a protobuf package declaration to satisfy Buf's PACKAGE_DEFINED lint: after the existing syntax = "proto3"; line in notes.proto, declare the package using the repository convention (i.e., add package notes;). Ensure the new package statement is placed before any message/service definitions so the file follows the same pattern as reminders.proto and versioned_document.proto.
pyicloud/services/notes/domain.py-15-18 (1)
15-18: ⚠️ Potential issue | 🟠 Major
`text` is still required on `NoteBody`.
In Pydantic v2, `Optional[str]` without a default is still a required field. That means `NoteBody(bytes=...)` will raise unless every caller passes `text=None` explicitly. If "no extracted text" is a valid state, default it to `None`. (docs.pydantic.dev)
Suggested change:
```diff
 class NoteBody(FrozenServiceModel):
     bytes: bytes
-    text: Optional[str]
+    text: Optional[str] = None
     attachment_ids: List[AttachmentId] = Field(default_factory=list)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/domain.py` around lines 15 - 18, The NoteBody Pydantic model currently declares text: Optional[str] without a default which makes it required under Pydantic v2; change the NoteBody model to give text a default (e.g. text: Optional[str] = None or text: Optional[str] = Field(default=None)) so callers can omit text when there is no extracted text; update the NoteBody class declaration (subclassing FrozenServiceModel) to use that default.
pyicloud/common/cloudkit/models.py-41-54 (1)
41-54: ⚠️ Potential issue | 🟠 Major
Timestamp coercion is brittle for signed strings and out-of-range values.
Both helpers reject signed numeric strings via `isdigit()`, and `_from_secs_or_millis` can raise on malformed large values. That can break whole-record parsing instead of degrading gracefully.
💡 Proposed hardening:
```diff
 def _from_millis_or_none(v):
     # Accept int/float or digit-only str; be strict about being milliseconds.
     if isinstance(v, (int, float)):
         iv = int(v)
-    elif isinstance(v, str) and v.isdigit():
-        iv = int(v)
-    elif isinstance(v, str) and v.startswith("0001-01-01"):
-        # ISO-like sentinel for year 1 -> treat as None
-        return None
+    elif isinstance(v, str):
+        sv = v.strip()
+        if sv.startswith("0001-01-01"):
+            # ISO-like sentinel for year 1 -> treat as None
+            return None
+        if sv.lstrip("+-").isdigit():
+            iv = int(sv)
+        else:
+            raise TypeError("Expected milliseconds since epoch as int or digit string")
     else:
         raise TypeError("Expected milliseconds since epoch as int or digit string")
@@
 def _from_secs_or_millis(v):
     if isinstance(v, (int, float)):
         iv = int(v)
-    elif isinstance(v, str) and v.isdigit():
-        iv = int(v)
+    elif isinstance(v, str):
+        sv = v.strip()
+        if sv.lstrip("+-").isdigit():
+            iv = int(sv)
+        else:
+            raise TypeError("Expected seconds or milliseconds since epoch as int/str")
     else:
         raise TypeError("Expected seconds or milliseconds since epoch as int/str")
-    # Heuristic: values < 1e11 are seconds (covers dates up to ~5138 CE)
-    if abs(iv) < 100_000_000_000:
-        return datetime.fromtimestamp(iv, tz=timezone.utc)
-    # Otherwise treat as milliseconds
-    return datetime.fromtimestamp(iv / 1000.0, tz=timezone.utc)
+    try:
+        # Heuristic: values < 1e11 are seconds (covers dates up to ~5138 CE)
+        if abs(iv) < 100_000_000_000:
+            return datetime.fromtimestamp(iv, tz=timezone.utc)
+        # Otherwise treat as milliseconds
+        return datetime.fromtimestamp(iv / 1000.0, tz=timezone.utc)
+    except (ValueError, OSError, OverflowError):
+        return None
```

Also applies to: 98-109
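The degrade-to-`None` behaviour can be exercised standalone. In this sketch the `parse_epoch` name is illustrative (it is not the module's helper), and the signed-string handling is folded into a plain `int(...)` guarded by `try/except`:

```python
# Self-contained sketch of the hardened coercion: signed or padded numeric
# strings are accepted; malformed or out-of-range inputs degrade to None.
from datetime import datetime, timezone
from typing import Optional, Union


def parse_epoch(v: Union[int, float, str, None]) -> Optional[datetime]:
    if isinstance(v, (int, float)):
        iv = int(v)
    elif isinstance(v, str):
        try:
            iv = int(v.strip())  # handles "+5", "-5", "  12  "
        except ValueError:
            return None  # malformed string: degrade instead of raising
    else:
        return None
    try:
        # Values below 1e11 read as seconds (good through ~5138 CE),
        # larger magnitudes as milliseconds.
        if abs(iv) < 100_000_000_000:
            return datetime.fromtimestamp(iv, tz=timezone.utc)
        return datetime.fromtimestamp(iv / 1000.0, tz=timezone.utc)
    except (ValueError, OSError, OverflowError):
        return None  # out-of-range epoch: degrade instead of raising
```

The point of the heuristic is that one field can carry either unit without whole-record parsing ever failing.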
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/common/cloudkit/models.py` around lines 41 - 54, The timestamp coercion in _from_millis_or_none (and likewise in _from_secs_or_millis) should stop relying on str.isdigit() and instead accept signed/whitespace-padded numeric strings and guard against malformed or huge values: trim the string, allow an optional leading '+'/'-' when parsing, wrap int(...) conversion in try/except catching ValueError/OverflowError and return None on error, then keep the existing sentinel check against SENTINEL_ZERO_MS and CANONICAL_MIN_MS (returning None if matched); ensure negative or out-of-range ints are treated as None rather than raising.
pyicloud/services/notes/client.py-100-103 (1)
100-103: ⚠️ Potential issue | 🟠 Major
Add bounded timeouts to outbound Notes HTTP calls.
These `post()` and streaming `get()` calls use Requests defaults, so a stalled CloudKit API or asset download can block the caller indefinitely.
Proposed fix:
```diff
 def post(self, path: str, payload: Dict) -> Dict:
     url = self._build_url(path)
     LOGGER.info("POST to %s", url)
-    resp = self._session.post(url, json=payload)
+    resp = self._session.post(url, json=payload, timeout=(10.0, 60.0))
     code = getattr(resp, "status_code", 0)
@@
 def get_stream(self, url: str, chunk_size: int = 65536) -> Iterator[bytes]:
     LOGGER.info("GET stream from %s", url)
-    resp = self._session.get(url, stream=True)
+    resp = self._session.get(url, stream=True, timeout=(10.0, 60.0))
     code = getattr(resp, "status_code", 0)
```

Also applies to: 184-186
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/client.py` around lines 100 - 103, The post() and streaming get() methods are using requests without timeouts so calls can hang; update the post(self, path: str, payload: Dict) -> Dict and the streaming get() (the method around lines 184-186) to supply a bounded timeout to _session.post and _session.get (e.g., pass timeout=DEFAULT_TIMEOUT) by introducing a module/class-level DEFAULT_TIMEOUT or an instance attribute (e.g., self.timeout) and use _build_url to form the URL as today; ensure the timeout can be overridden by an optional parameter or instance config and apply the same pattern to both _session.post and _session.get streaming calls.
pyicloud/services/notes/service.py-771-777 (1)
771-777: ⚠️ Potential issue | 🟠 Major
Only send canonical attachment record names to `/records/lookup`.
`lookup_ids` mixes decoded-body aliases with actual CloudKit record names. Those aliases are not guaranteed to be lookupable, so one bad alias can make the batch fail and you then fall back to `records=[]`, dropping otherwise valid attachments. It also lets the same attachment appear twice because aliases are cached as separate `Attachment` objects instead of the canonical instance.
Proposed fix:
```diff
 seen_lookup: set[str] = set()
 lookup_ids: List[str] = []
-for cid in lookup_candidates + ids:
+for cid in lookup_candidates:
     if cid not in seen_lookup:
         seen_lookup.add(cid)
         lookup_ids.append(cid)
@@
 base_id = attachment.id
 aliases = self._attachment_aliases(rec_a, base_id)
 for alias in aliases:
-    if alias == base_id:
-        att_obj = attachment
-    else:
-        att_obj = Attachment(
-            id=alias,
-            filename=attachment.filename,
-            uti=attachment.uti,
-            size=attachment.size,
-            download_url=attachment.download_url,
-            preview_url=attachment.preview_url,
-            thumbnail_url=attachment.thumbnail_url,
-        )
-    self._attachment_meta_cache[alias] = att_obj
+    self._attachment_meta_cache[alias] = attachment
```

Also applies to: 822-847
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/service.py` around lines 771 - 777, The loop building lookup_ids currently mixes decoded-body alias IDs with canonical CloudKit record names (using variables seen_lookup, lookup_candidates, ids, lookup_ids), which can cause records/lookup to fail or duplicate attachments; change the collection logic to resolve each candidate/ID to its canonical record name (e.g., from the Attachment object or the record's 'recordName' field) before adding it, deduplicate on that canonical name (not the alias), and only send those canonical names to /records/lookup; apply the same change to the other similar block mentioned (lines ~822-847) so both places use canonical record names and de-duplicate correctly.
pyicloud/services/notes/service.py-46-51 (1)
46-51: ⚠️ Potential issue | 🟠 Major
Use a single public `NotesError` base across client and service.
`NotesApiError`, `NotesAuthError`, and `NotesRateLimited` inherit from `pyicloud.services.notes.client.NotesError`, while `NoteNotFound` and `NoteLockedError` inherit from a different `NotesError` declared here. `except NotesError` will therefore miss transport failures.
Proposed fix:
```diff
 from .client import (
     CloudKitNotesClient,
+    NotesError,
     NotesApiError,
     NotesAuthError,
     NotesRateLimited,
 )
@@
-class NotesError(Exception):
-    """Base Notes service error."""
-
-
 class NoteNotFound(NotesError):
     pass
```

Also applies to: 63-72
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/service.py` around lines 46 - 51, The file defines a different local NotesError than the client, so except NotesError misses transport errors; import the single NotesError from pyicloud.services.notes.client (alongside CloudKitNotesClient, NotesApiError, NotesAuthError, NotesRateLimited) and remove or stop redefining a separate NotesError in this module, then make NoteNotFound and NoteLockedError subclass the imported NotesError (and update any other local error classes that currently inherit the local NotesError to inherit the imported one) so all note-related errors share the same public base.
pyicloud/services/notes/client.py-94-103 (1)
94-103: ⚠️ Potential issue | 🟠 Major
Redact CloudKit query strings and signed asset URLs before logging.
`_build_url()` appends the client params to the URL, and asset `downloadURL`s are signed. Logging them verbatim at info level leaks credentials into application logs.
Proposed fix:
```diff
 class _CloudKitClient:
+    @staticmethod
+    def _redact_url(url: str) -> str:
+        from urllib.parse import urlsplit, urlunsplit
+
+        parts = urlsplit(url)
+        return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
+
     def post(self, path: str, payload: Dict) -> Dict:
         url = self._build_url(path)
-        LOGGER.info("POST to %s", url)
+        LOGGER.info("POST to %s", self._redact_url(url))
         resp = self._session.post(url, json=payload, timeout=(10.0, 60.0))
@@
     def get_stream(self, url: str, chunk_size: int = 65536) -> Iterator[bytes]:
-        LOGGER.info("GET stream from %s", url)
+        LOGGER.info("GET stream from %s", self._redact_url(url))
         resp = self._session.get(url, stream=True, timeout=(10.0, 60.0))
@@
-        LOGGER.info("Downloading asset from %s to directory %s", url, directory)
+        LOGGER.info(
+            "Downloading asset from %s to directory %s",
+            self._http._redact_url(url),
+            directory,
+        )
```

Also applies to: 184-186, 345-346
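The redaction helper itself is small. As a standalone sketch (the host in the example is made up), it keeps scheme, netloc and path and drops the query and fragment, which is where the signed tokens live:

```python
# Minimal URL redaction: drop query string and fragment so signed CloudKit
# tokens never reach the log stream.
from urllib.parse import urlsplit, urlunsplit


def redact_url(url: str) -> str:
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```

Logging the redacted form still identifies which endpoint was hit, which is usually all the log needs.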
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/client.py` around lines 94 - 103, The code currently logs full URLs (constructed in _build_url) which include client query params and signed asset downloadURL values, leaking credentials; update _build_url to return the base URL without sensitive query params (or add a helper like _redacted_url) and change logging in post (and other callers of _build_url such as any GET/POST spots) to log the redacted form via LOGGER.info; specifically, ensure LOGGER.info("POST to %s", ...) uses a redacted URL (strip or mask all query string values, and when encountering asset downloadURL parameters or JSON payload fields named downloadURL/assetUrl, replace their values with a fixed mask like "<redacted>") so signed tokens never appear in logs while keeping non-sensitive path information.
example_reminders.py-263-269 (1)
263-269: ⚠️ Potential issue | 🟠 Major
Initialise
`api` before the `try`.
--cleanup, any failure before theauthenticate()assignment leavesapiundefined, and thefinallyblock then raisesUnboundLocalError, hiding the real problem.Suggested fix
```diff
 def main() -> int:
     args = parse_args()
     tracker = ValidationTracker()
     state = RunState()
+    api: Optional[PyiCloudService] = None
     try:
         api = authenticate(args)
         reminders_api = api.reminders
@@
     finally:
         # Cleanup always runs if requested, even after failures.
-        if args.cleanup:
+        if args.cleanup and api is not None:
             try:
                 cleanup_generated(api, state)
             except Exception as cleanup_exc:  # pragma: no cover
                 print(f"[WARN] Cleanup failed: {cleanup_exc}")
```

Also applies to: 1053-1058
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example_reminders.py` around lines 263 - 269, The variable api must be initialized before the try so the finally block can safely reference it even if authenticate(args) raises; in main() set a default (e.g., api = None) before entering the try that calls authenticate(args), and update the finally block to check that api is not None (or truthy) before using/closing it; locate main(), the authenticate(args) call and the finally block to apply this change (also replicate the same initialization pattern around the code at the other occurrence around lines noted for the same issue).
pyicloud/services/notes/rendering/table_builder.py-101-110 (1)
101-110: ⚠️ Potential issue | 🟠 Major
Keep `axis.total` in step with remapped positions.
ordering.contentspath can fillaxis.indiceswithout increasingaxis.total.render_table_from_mergeable()then skips buffer initialisation but still parses cell columns, so the first resolved cell write can hitIndexErroron a 0×0 grid.Suggested fix
```diff
 pos = axis.indices.get(k_idx, axis.indices.get(v_idx, 0))  # type: ignore[arg-type]
 axis.indices[v_idx] = pos
+axis.total = max(axis.total, pos + 1)
@@
-    if pending_cell_columns:
+    if pending_cell_columns is not None and tb.rows.total > 0 and tb.cols.total > 0:
         tb.parse_cell_columns(pending_cell_columns)
```

Also applies to: 239-242
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/table_builder.py` around lines 101 - 110, The axis indices are being populated from ordering contents but axis.total is not updated, which can leave a 0×0 grid and cause IndexError later in render_table_from_mergeable; update axis.total whenever you assign a position into axis.indices (in the loop that iterates entry.ordered_set.ordering.contents.element in table_builder.py) by ensuring axis.total = max(axis.total, pos + 1) after computing pos and before setting axis.indices[v_idx] = pos, and apply the same change to the other analogous block around the lines referenced (the block at 239-242) so total always reflects the highest occupied index.
pyicloud/services/reminders/protobuf/reminders.proto-3-3 (1)
3-3: ⚠️ Potential issue | 🟠 Major
Fix the package/directory mismatch before this lands.
Buf is already flagging `PACKAGE_DIRECTORY_MATCH`, so this schema will fail lint while `package topotext;` sits under `pyicloud/services/reminders/protobuf`. Either move the file under a `topotext/` directory or exempt this extracted schema in Buf.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/protobuf/reminders.proto` at line 3, The proto file declares "package topotext;" but lives under pyicloud/services/reminders/protobuf which triggers Buf's PACKAGE_DIRECTORY_MATCH rule; fix by either moving this proto into a topotext/ directory so the package path matches the filesystem, or update your Buffile/buf.yaml to exempt this extracted schema from PACKAGE_DIRECTORY_MATCH (or change the package to match the current directory) so linting passes; reference the package declaration "package topotext;" and the Buf rule "PACKAGE_DIRECTORY_MATCH" when making the change.
pyicloud/services/reminders/_writes.py-348-371 (1)
348-371: ⚠️ Potential issue | 🟠 Major
`update()` silently drops most editable reminder fields.
due_date,priority,flagged,all_day,time_zoneandparent_reminder_idare ignored, so callers get a successful return with stale server state.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/_writes.py` around lines 348 - 371, The update() method only persists title/notes/completed; include all editable reminder fields (due_date, priority, flagged, all_day, time_zone, parent_reminder_id) in both fields_mod and the fields dict so they are sent to the server via _submit_single_record_update; map each to the correct iCloud field keys and types (e.g. "DueDate"/"LastModifiedDate" as TIMESTAMP with epoch ms, "Priority"/"Flagged"/"AllDay" as INT64 booleans/integers, "TimeZone" and "ParentReminderId" as STRING), ensure values are converted (timestamps -> int(time.time()*1000 or due_date epoch ms, booleans -> 1/0), and include them when generating token_map with _generate_resolution_token_map so the server receives and resolves these changes for the Reminder object passed to update().
pyicloud/services/notes/rendering/renderer.py-557-567 (1)
557-567: ⚠️ Potential issue | 🟠 Major
Allow-list link schemes before rendering anchors.
Escaping the URL only protects the HTML syntax; it still renders `javascript:` or `data:` links as clickable anchors. Exported notes should drop or de-link anything outside safe schemes such as `http`, `https`, `mailto` and `tel`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/renderer.py` around lines 557 - 567, The code currently always renders sig.link into an <a> which allows unsafe schemes; before wrapping styled with an anchor, parse sig.link (e.g. using urllib.parse.urlparse) and allow only a whitelist of schemes (http, https, mailto, tel); if the parsed scheme (lowercased) is not in that set then do not render a clickable anchor—output the escaped URL/text (html.escape(sig.link)) or a non-clickable element instead; keep existing config handling (link_rel/referrer_policy) and only apply it when the scheme check passes.
pyicloud/services/notes/rendering/renderer.py-681-705 (1)
681-705: ⚠️ Potential issue | 🟠 Major
Open a new `<li>`, not a new list container, when promoting spacer items.
`list_stack[-1]["tag"]` is `ul`/`ol`, but both branches emit it as though it were the next list item. The close logic still appends `</li>`, so spacer-to-text transitions generate malformed HTML and broken nesting.
Suggested fix:
```diff
-    raw_tag = list_stack[-1]["tag"]
-    fragments.append(f"<{raw_tag}>")
+    fragments.append("<li>")
     list_stack[-1]["li_open"] = True
     list_stack[-1]["li_index"] = len(fragments) - 1
     list_stack[-1]["li_has_content"] = False
@@
-    tag_name = list_stack[-1]["tag"]
-    fragments.append(f"<{tag_name}>")
+    fragments.append("<li>")
     list_stack[-1]["li_open"] = True
     list_stack[-1]["li_index"] = len(fragments) - 1
     list_stack[-1]["li_has_content"] = False
```

Also applies to: 793-815
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/renderer.py` around lines 681 - 705, The code currently uses list_stack[-1]["tag"] (which is 'ul'/'ol') as the tag when opening a promoted spacer item, producing malformed nesting; change the logic in the renderer function handling spacer-to-text promotion so it appends a new "<li>" (not "<ul>" or "<ol>") and later closes with "</li>", update list_stack[-1]["li_open"] and list_stack[-1]["li_index"] accordingly, and preserve insertion of the checkbox HTML when mr.sig.style_type == StyleType.CHECKBOX; apply the same fix to the identical block around the other occurrence noted (the block around lines 793-815).
pyicloud/services/notes/rendering/exporter.py-42-47 (1)
42-47: ⚠️ Potential issue | 🟠 Major

Return `None` on parse failures as documented.

A bad `TextDataEncrypted` payload currently escapes from this helper and aborts the export, even though the docstring says unparseable bodies are a non-fatal `None`.

Suggested fix

```diff
-    nb = BodyDecoder().decode(raw)
-    if not nb or not getattr(nb, "bytes", None):
-        return None
-    msg = pb.NoteStoreProto()
-    msg.ParseFromString(nb.bytes)
-    return getattr(getattr(msg, "document", None), "note", None)
+    try:
+        nb = BodyDecoder().decode(raw)
+        if not nb or not getattr(nb, "bytes", None):
+            return None
+        msg = pb.NoteStoreProto()
+        msg.ParseFromString(nb.bytes)
+        return getattr(getattr(msg, "document", None), "note", None)
+    except Exception:
+        return None
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/exporter.py` around lines 42 - 47, The function currently calls msg.ParseFromString(nb.bytes) which can raise on malformed TextDataEncrypted payloads and thus should be treated as a non-fatal parse failure; wrap the ParseFromString (and the subsequent access to msg.document.note) in a try/except that catches parsing errors (e.g., Exception or protobuf parsing errors) and returns None on failure so the helper adheres to its docstring; locate the BodyDecoder().decode(raw) usage, the nb/nb.bytes check, and the pb.NoteStoreProto()/msg.ParseFromString call and modify that block to return None when parsing raises instead of letting the exception propagate.pyicloud/services/notes/rendering/exporter.py-507-513 (1)
507-513: ⚠️ Potential issue | 🟠 Major

Keep `filename` inside `out_dir`.

Joining the caller-supplied `filename` directly lets path components escape the export root, e.g. `../note.html`.

Suggested fix

```diff
     os.makedirs(out_dir, exist_ok=True)
     page = render_note_page(title, html_fragment) if full_page else html_fragment
     fname = filename or f"{_safe_name(title)}.html"
-    path = os.path.join(out_dir, fname)
+    root = os.path.abspath(out_dir)
+    path = os.path.abspath(os.path.join(root, fname))
+    if os.path.commonpath([root, path]) != root:
+        raise ValueError("filename must stay within out_dir")
     with open(path, "w", encoding="utf-8") as f:
         f.write(page)
     return path
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/exporter.py` around lines 507 - 513, The current export writes a caller-supplied filename directly into path allowing path traversal; ensure the final file stays inside out_dir by normalizing and restricting filename: if a filename is provided, strip any directory components (e.g., use os.path.basename) or compute candidate = os.path.normpath(os.path.join(out_dir, filename)) and validate it with os.path.commonpath([out_dir, candidate]) == os.path.abspath(out_dir) before opening; fall back to f"{_safe_name(title)}.html" when validation fails or filename is None. Update the code around variables out_dir, filename, fname, and path (and where render_note_page is used) to enforce this check.pyicloud/services/reminders/service.py-4-21 (1)
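The containment check in the suggested fix can be exercised on its own. This is a minimal sketch, not the exporter's code: the helper name `resolve_within` is hypothetical, but the `os.path.commonpath` comparison is the same guard, and it rejects a filename that climbs out of the export root:

```python
import os

def resolve_within(root: str, filename: str) -> str:
    """Join filename under root, rejecting results that escape root."""
    abs_root = os.path.abspath(root)
    candidate = os.path.abspath(os.path.join(abs_root, filename))
    # commonpath collapses to a shorter prefix when candidate leaves abs_root
    if os.path.commonpath([abs_root, candidate]) != abs_root:
        raise ValueError("filename must stay within out_dir")
    return candidate

resolve_within("export", "note.html")          # fine: stays inside export/
try:
    resolve_within("export", "../note.html")   # escapes the root -> rejected
except ValueError as err:
    print(err)
```

Note that the comparison also rejects sibling-prefix tricks such as `exportX/...`, because `commonpath` works on path components, not raw string prefixes.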
4-21: ⚠️ Potential issue | 🟠 Major

Keep the legacy entry points or migrate callers in this PR.

This replacement turns `lists` into a bound method and drops `refresh()`/`post()` entirely, which is exactly what the current pipeline is failing on. Because the class still sits on the existing reminders service path, downstream code cannot upgrade incrementally; please either keep a thin compatibility adapter here or land the caller/test migration alongside this change.

Also applies to: 68-118
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/service.py` around lines 4 - 21, The new RemindersService changed the API (made lists a bound method and removed refresh()/post()), breaking callers; restore backward compatibility by adding thin adapter methods on RemindersService named lists (keeping previous signature/behavior), refresh, and post that internally delegate to the new implementations (e.g., call self.lists() or the new sync/list_reminders/post-equivalent methods) so existing callers continue to work, or alternatively update all callers/tests to use the new bound method names; specifically locate RemindersService and reintroduce methods RemindersService.lists(), RemindersService.refresh(), and RemindersService.post() that forward to the new functions (and preserve return types and side effects) so downstream code does not break.
🟡 Minor comments (7)
README.md-888-889 (1)
888-889: ⚠️ Potential issue | 🟡 Minor

Guard `next(iter(...))` calls in examples to avoid `StopIteration`.

A few snippets assume at least one list/reminder exists. In empty accounts, these examples will crash on first run.

Suggested doc tweak pattern

```diff
-target_list = next(iter(reminders.lists()))
+target_list = next(iter(reminders.lists()), None)
+if target_list is None:
+    raise RuntimeError("No reminder lists found")
```

```diff
-reminder = next(iter(reminders.reminders()))
+reminder = next(iter(reminders.reminders()), None)
+if reminder is None:
+    raise RuntimeError("No reminders found")
```

Also applies to: 914-915, 960-960, 982-982
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 888 - 889, The example uses target_list = next(iter(reminders.lists())) which will raise StopIteration for empty accounts; change examples to guard the call by either converting to a sequence and checking it's non-empty (e.g., lists = list(reminders.lists()); if not lists: handle-empty-case; target_list = lists[0]) or use next(iter(reminders.lists()), None) and handle None, and apply the same fix for the other occurrences (the snippets that call next(iter(reminders.lists())) or similar).pyicloud/base.py-939-947 (1)
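The two-argument form of `next` that the tweak relies on can be verified in isolation (the list below is only a stand-in for `reminders.lists()`, which needs a live session):

```python
reminder_lists = []  # stand-in for reminders.lists() on an empty account

# One-argument next() raises StopIteration on an exhausted iterator.
try:
    target = next(iter(reminder_lists))
except StopIteration:
    target = None

# The two-argument form returns the default instead of raising.
assert next(iter(reminder_lists), None) is None

# With data present, both forms yield the first item.
assert next(iter(["Groceries"]), None) == "Groceries"
```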
939-947: ⚠️ Potential issue | 🟡 Minor

Normalise missing `ckdatabasews` failures in both accessors.

If `get_webservice_url("ckdatabasews")` fails here, `api.reminders` and `api.notes` currently leak `PyiCloudServiceNotActivatedException` instead of the service-specific `PyiCloudServiceUnavailable` wrapper. That makes service-unavailable handling inconsistent with the rest of this facade.

Suggested change

```diff
         if not self._reminders:
-            service_root: str = self.get_webservice_url("ckdatabasews")
             try:
+                service_root: str = self.get_webservice_url("ckdatabasews")
                 self._reminders = RemindersService(
                     service_root=service_root,
                     session=self.session,
                     params=self.params,
                     cloudkit_validation_extra=self._cloudkit_validation_extra,
                 )
-            except (PyiCloudAPIResponseException,) as error:
+            except (
+                PyiCloudAPIResponseException,
+                PyiCloudServiceNotActivatedException,
+            ) as error:
                 raise PyiCloudServiceUnavailable(
                     "Reminders service not available"
                 ) from error
```

```diff
         if not self._notes:
             try:
                 service_root: str = self.get_webservice_url("ckdatabasews")
                 self._notes = NotesService(
                     service_root=service_root,
                     session=self.session,
                     params=self.params,
                     cloudkit_validation_extra=self._cloudkit_validation_extra,
                 )
-            except (PyiCloudAPIResponseException,) as error:
+            except (
+                PyiCloudAPIResponseException,
+                PyiCloudServiceNotActivatedException,
+            ) as error:
                 raise PyiCloudServiceUnavailable(
                     "Notes service not available"
                 ) from error
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/base.py` around lines 939 - 947, The reminders and notes accessors call get_webservice_url("ckdatabasews") and if that call raises PyiCloudServiceNotActivatedException it currently bubbles up; instead catch that exception around the get_webservice_url call in the _reminders and _notes lazy-initialiser blocks (the code that assigns self._reminders = RemindersService(...) and self._notes = NotesService(...)) and raise a PyiCloudServiceUnavailable wrapping the original exception (include the original exception as context) so api.reminders and api.notes present the same service-unavailable error type as other facades; apply the same change to the analogous block that initialises _notes (the code referenced in the comment for lines 976-988).tests/test_notes.py-208-216 (1)
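The wrapping pattern the fix standardises on preserves the root cause via `__cause__`. A minimal sketch with stand-in exception classes (names hypothetical, not pyicloud's own):

```python
class ServiceNotActivated(Exception):
    """Stand-in for PyiCloudServiceNotActivatedException."""

class ServiceUnavailable(Exception):
    """Stand-in for PyiCloudServiceUnavailable."""

def get_webservice_url(key: str) -> str:
    # Simulate a webservice URL lookup failing for a missing service
    raise ServiceNotActivated(f"{key} not provisioned")

def reminders():
    try:
        return get_webservice_url("ckdatabasews")
    except (ServiceNotActivated,) as error:
        # `raise ... from error` keeps the original failure on __cause__
        raise ServiceUnavailable("Reminders service not available") from error

try:
    reminders()
except ServiceUnavailable as exc:
    assert isinstance(exc.__cause__, ServiceNotActivated)
```

Callers then catch one consistent exception type while the chained traceback still shows which underlying call failed.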
208-216: ⚠️ Potential issue | 🟡 Minor

Use a platform temp directory instead of a fixed `/tmp` path.

Line 208 is environment-specific and can make CI flaky on non-Unix runners.

🛠️ Proposed portability fix

```diff
 import importlib
 import os
+import tempfile
 import unittest
@@
-        output_path = "/tmp/python-test-results/notes-export/note.html"
+        output_path = os.path.join(
+            tempfile.gettempdir(), "python-test-results", "notes-export", "note.html"
+        )
@@
-        exported = self.service.export_note(
-            "Note/1", "/tmp/python-test-results/notes-export"
-        )
+        exported = self.service.export_note(
+            "Note/1",
+            os.path.join(
+                tempfile.gettempdir(), "python-test-results", "notes-export"
+            ),
+        )
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes.py` around lines 208 - 216, Replace the hard-coded "/tmp/python-test-results/notes-export" and output_path with a platform-independent temporary directory: use tempfile.TemporaryDirectory (or tempfile.mkdtemp) to create a temp dir, build output_path from that temp dir (e.g., Path(tempdir) / "note.html"), and pass that tempdir to self.service.export_note; update the patch target and the mocked return_value accordingly so the test uses the temporary directory instead of the fixed "/tmp" path (references: tests/test_notes.py, NoteExporter.export, self.service.export_note).tests/test_notes_rendering.py-195-198 (1)
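The portable replacement for the hard-coded path can be built from `tempfile.gettempdir()`, which resolves to the platform's temp root on every OS (the directory names below mirror the fixture paths in the diff):

```python
import os
import tempfile

# Platform temp root: typically /tmp on Unix, %TEMP% on Windows.
out_dir = os.path.join(
    tempfile.gettempdir(), "python-test-results", "notes-export"
)
os.makedirs(out_dir, exist_ok=True)
output_path = os.path.join(out_dir, "note.html")
print(output_path)
```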
195-198: ⚠️ Potential issue | 🟡 Minor

Avoid hard-coding `/tmp` for test output paths.

Line 196 is Unix-specific and can fail on non-Unix runners; use the platform temp directory.

🛠️ Proposed portability fix

```diff
 import json
 import os
+import tempfile
 import unittest
@@
     def _output_dir(self, name):
-        path = os.path.join("/tmp/python-test-results", "notes-rendering", name)
+        path = os.path.join(
+            tempfile.gettempdir(), "python-test-results", "notes-rendering", name
+        )
         os.makedirs(path, exist_ok=True)
         return path
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes_rendering.py` around lines 195 - 198, The helper function _output_dir in tests/test_notes_rendering.py currently hardcodes "/tmp", which is non‑portable; change it to use the platform temp directory (e.g., tempfile.gettempdir() or tempfile.mkdtemp()) and build the path with os.path.join so tests run on non‑Unix runners, and add the required import for tempfile if missing.pyicloud/services/reminders/__init__.py-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor

Remove unnecessary eager `tzlocal` import from the reminders package API.

Line 3 imports `get_localzone_name` with the `# noqa: F401` flag, indicating it is unused. The symbol is not included in `__all__` and is not used within the reminders service. This unnecessary import is executed whenever the `pyicloud.services.reminders` module is imported, adding an avoidable dependency load.

♻️ Suggested fix

```diff
-from tzlocal import get_localzone_name  # noqa: F401
```

Remove the import. If timezone resolution is needed in the reminders service implementation in the future, import `tzlocal` at the point of use (inside the relevant service method), not at the package level.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/__init__.py` at line 3, Remove the unnecessary eager import of get_localzone_name from tzlocal in the pyicloud.services.reminders package API; specifically delete the top-level import of get_localzone_name in reminders/__init__.py (it is not in __all__ nor used), and if timezone resolution is later required, import tzlocal.get_localzone_name lazily inside the specific reminders service function/method that needs it rather than at module import time.pyicloud/services/notes/rendering/options.py-57-69 (1)
57-69: ⚠️ Potential issue | 🟡 Minor

`is_image_uti` matching is not fully case-insensitive and suppresses configuration errors.

The method lowercases only the input UTI; caller-provided prefixes and exacts are not normalised. If configuration values are provided in mixed case (e.g., `ExportConfig(image_uti_exacts=("Public.JPEG",))`), the matching will silently fail. Additionally, the broad exception suppression at lines 63-67 hides configuration errors, such as non-string values being passed.

🛠️ Proposed fix

```diff
     def is_image_uti(self, uti: Optional[str]) -> bool:
         if not uti:
             return False
-        u = uti.lower()
-        # Prefixes
-        for p in self.image_uti_prefixes or ("public.image",):
-            try:
-                if u.startswith(p):
-                    return True
-            except Exception:
-                pass
-        # Exact forms
-        return u in (self.image_uti_exacts or ())
+        u = uti.casefold()
+        prefixes = tuple(
+            p.casefold() for p in (self.image_uti_prefixes or ("public.image",))
+        )
+        if any(u.startswith(p) for p in prefixes):
+            return True
+        exacts = tuple(e.casefold() for e in (self.image_uti_exacts or ()))
+        return u in exacts
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/options.py` around lines 57 - 69, The is_image_uti method lowercases only the input UTI but not the caller-provided image_uti_prefixes and image_uti_exacts and also hides configuration problems by catching all exceptions; update is_image_uti to normalize configured prefixes and exacts to lowercase before matching, and replace the broad try/except with explicit type checks that raise a clear TypeError (or ValueError) if a non-string is found in self.image_uti_prefixes or self.image_uti_exacts; specifically, in is_image_uti iterate over (self.image_uti_prefixes or ("public.image",)) and for each item assert isinstance(item, str) then compare u.startswith(item.lower()), and for exact matches compare u against a lowercased tuple of (self.image_uti_exacts or ()), raising on invalid types instead of silencing them.pyicloud/services/reminders/_protocol.py-83-90 (1)
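The normalise-both-sides matching can be sketched as a free function (signature hypothetical; the real method reads its prefixes and exacts from the config object):

```python
from typing import Iterable, Optional

def is_image_uti(
    uti: Optional[str],
    prefixes: Iterable[str] = ("public.image",),
    exacts: Iterable[str] = ("Public.JPEG",),
) -> bool:
    """Case-insensitive UTI match: casefold the input AND the config values."""
    if not uti:
        return False
    u = uti.casefold()
    if any(u.startswith(p.casefold()) for p in prefixes):
        return True
    return u in {e.casefold() for e in exacts}

assert is_image_uti("PUBLIC.IMAGE.raw")   # prefix match despite input case
assert is_image_uti("public.jpeg")        # exact match despite mixed-case config
assert not is_image_uti("com.adobe.pdf")
assert not is_image_uti(None)
```

`str.casefold()` is preferred over `lower()` for caseless matching because it also folds characters that `lower()` leaves distinct.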
83-90: ⚠️ Potential issue | 🟡 Minor

Treat malformed base64 CRDT payloads as undecodable input.

Line 90 can still raise `binascii.Error` for non-base64 strings, so this helper skips its later `""` fallback on malformed payloads.

Proposed fix

```diff
 def _decode_crdt_document(encrypted_value: str | bytes) -> str:
     """Decode a CRDT document (TitleDocument or NotesDocument)."""
     data = encrypted_value
     if isinstance(data, str):
         padding = 4 - (len(data) % 4)
         if padding != 4:
             data += "=" * padding
-        data = base64.b64decode(data)
+        try:
+            data = base64.b64decode(data)
+        except (binascii.Error, ValueError):
+            return ""
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/_protocol.py` around lines 83 - 90, In _decode_crdt_document, base64.b64decode can raise binascii.Error for malformed strings and leave data as a str, skipping the later empty-byte fallback; wrap the b64 decoding in a try/except catching binascii.Error (and ValueError if desired) and on exception set data = b"" (or otherwise treat as undecodable) so the function continues with the intended empty-byte fallback behavior; update the block inside _decode_crdt_document where base64.b64decode is called to handle this exception.
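The failure mode and the fix are easy to reproduce with the standard library alone; `base64.b64decode` raises `binascii.Error` on malformed input rather than returning empty bytes. A sketch of the repad-then-fallback behaviour (helper name hypothetical):

```python
import base64
import binascii

def decode_lenient(data: str) -> bytes:
    """Repad to a multiple of 4, returning b"" for undecodable input."""
    padding = 4 - (len(data) % 4)
    if padding != 4:
        data += "=" * padding
    try:
        return base64.b64decode(data)
    except (binascii.Error, ValueError):
        return b""

assert decode_lenient("aGVsbG8") == b"hello"      # missing padding repaired
assert decode_lenient("%%%not-base64%%%") == b""  # malformed -> empty fallback
```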
🧹 Nitpick comments (2)
tests/test_notes.py (1)
34-37: Replace the placeholder test with an explicit skip or a real assertion.

A `pass` test always succeeds and can mask missing coverage.

If you want, I can draft a concrete fixture-driven assertion for this test.

✅ Minimal explicit-skip alternative

```diff
+    @unittest.skip("TODO: implement once representative note fixture is available")
     def test_get_note(self):
         """Test getting a note."""
-        # This test will be implemented once sample data is available.
-        pass
+        ...
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes.py` around lines 34 - 37, The test test_get_note currently contains only a pass which silently succeeds; replace it with either an explicit skip (e.g., call pytest.skip with a short reason at the start of test_get_note) or implement a real assertion using the appropriate fixture/mock (e.g., fetch a sample note via the existing test fixtures and assert expected fields). Update test_get_note to use pytest.skip("reason") if you want to mark it intentionally pending, or add a concrete fixture-driven assertion against the note retrieval function to validate behavior.tests/test_notes_rendering.py (1)
85-94: Remove the no-op patch block to keep the test intent clear.

This block does not influence behaviour and makes the test harder to follow.

♻️ Proposed cleanup

```diff
-        # Patch CKRecord in exporter so isinstance(mock, CKRecord) passes
-        with unittest.mock.patch(
-            "pyicloud.services.notes.rendering.exporter.CKRecord", autospec=True
-        ) as MockCKRecord:
-            # We need note_rec to be an instance of this patched class
-            MockCKRecord.return_value = note_rec
-            # Actually, isinstance(byte_obj, MockClass) works if we set __class__ maybe?
-            # Easier: just bypass decode_and_parse_note manually since we are testing rendering, not decoding validation.
-            pass
```
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes_rendering.py` around lines 85 - 94, Remove the no-op unittest.mock.patch block that patches "pyicloud.services.notes.rendering.exporter.CKRecord" (the with patch... as MockCKRecord ... pass block) since it does nothing and obscures intent; instead delete that entire patch scope and any related MockCKRecord.return_value assignment, and if the test needs to skip decoding validation keep the explicit bypass of decode_and_parse_note or set up note_rec directly for rendering tests so the test exercises the rendering path in exporter without the unused CKRecord patch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e78fd615-9f4d-4eb0-bc49-87d0b18ff090
📒 Files selected for processing (61)
- `README.md`
- `example_reminders.py`
- `example_reminders_delta.py`
- `examples/notes_cli.py`
- `pyicloud/base.py`
- `pyicloud/common/__init__.py`
- `pyicloud/common/cloudkit/__init__.py`
- `pyicloud/common/cloudkit/base.py`
- `pyicloud/common/cloudkit/models.py`
- `pyicloud/common/models.py`
- `pyicloud/services/__init__.py`
- `pyicloud/services/notes/__init__.py`
- `pyicloud/services/notes/client.py`
- `pyicloud/services/notes/decoding.py`
- `pyicloud/services/notes/domain.py`
- `pyicloud/services/notes/models/__init__.py`
- `pyicloud/services/notes/models/constants.py`
- `pyicloud/services/notes/models/dto.py`
- `pyicloud/services/notes/protobuf/__init__.py`
- `pyicloud/services/notes/protobuf/notes.proto`
- `pyicloud/services/notes/protobuf/notes_pb2.py`
- `pyicloud/services/notes/protobuf/notes_pb2.pyi`
- `pyicloud/services/notes/rendering/__init__.py`
- `pyicloud/services/notes/rendering/attachments.py`
- `pyicloud/services/notes/rendering/ck_datasource.py`
- `pyicloud/services/notes/rendering/debug_tools.py`
- `pyicloud/services/notes/rendering/exporter.py`
- `pyicloud/services/notes/rendering/options.py`
- `pyicloud/services/notes/rendering/renderer.py`
- `pyicloud/services/notes/rendering/renderer_iface.py`
- `pyicloud/services/notes/rendering/table_builder.py`
- `pyicloud/services/notes/service.py`
- `pyicloud/services/reminders.py`
- `pyicloud/services/reminders/__init__.py`
- `pyicloud/services/reminders/_constants.py`
- `pyicloud/services/reminders/_mappers.py`
- `pyicloud/services/reminders/_protocol.py`
- `pyicloud/services/reminders/_reads.py`
- `pyicloud/services/reminders/_support.py`
- `pyicloud/services/reminders/_writes.py`
- `pyicloud/services/reminders/client.py`
- `pyicloud/services/reminders/models/__init__.py`
- `pyicloud/services/reminders/models/domain.py`
- `pyicloud/services/reminders/models/results.py`
- `pyicloud/services/reminders/protobuf/__init__.py`
- `pyicloud/services/reminders/protobuf/reminders.proto`
- `pyicloud/services/reminders/protobuf/reminders_pb2.py`
- `pyicloud/services/reminders/protobuf/typedef.json`
- `pyicloud/services/reminders/protobuf/typedef.py`
- `pyicloud/services/reminders/protobuf/versioned_document.proto`
- `pyicloud/services/reminders/protobuf/versioned_document_pb2.py`
- `pyicloud/services/reminders/service.py`
- `pyproject.toml`
- `requirements.txt`
- `requirements_test.txt`
- `tests/fixtures/note_fixture.json`
- `tests/services/test_reminders_cloudkit.py`
- `tests/test_base.py`
- `tests/test_notes.py`
- `tests/test_notes_cli.py`
- `tests/test_notes_rendering.py`
💤 Files with no reviewable changes (1)
- pyicloud/services/reminders.py
examples/notes_cli.py
Outdated
```python
        "deviceName",
        f"SMS to {device.get('phoneNumber', 'unknown')}",
    )
    logger.info(" %d: %s", i, name)
```

Check failure — Code scanning / CodeQL: Clear-text logging of sensitive information
Actionable comments posted: 7
🧹 Nitpick comments (20)
buf.yaml (1)
6-10: Add brief rationale comments for each ignored proto.

A one-line reason per entry will prevent accidental "cleanup" of intentional wire-format package names.

Suggested diff

```diff
 lint:
   use:
     - DEFAULT
   ignore_only:
     PACKAGE_DIRECTORY_MATCH:
+      # Intentional: package name mirrors Apple Notes wire format (`notes`).
       - pyicloud/services/notes/protobuf/notes.proto
+      # Intentional: package name mirrors Apple Reminders wire format (`topotext`).
       - pyicloud/services/reminders/protobuf/reminders.proto
+      # Intentional: package name mirrors Apple Reminders wire format (`versioned_document`).
       - pyicloud/services/reminders/protobuf/versioned_document.proto
```
Verify each finding against the current code and only fix it if needed. In `@buf.yaml` around lines 6 - 10, Add a one-line rationale comment for each ignored proto entry under ignore_only:PACKAGE_DIRECTORY_MATCH (e.g., for pyicloud/services/notes/protobuf/notes.proto, pyicloud/services/reminders/protobuf/reminders.proto, and pyicloud/services/reminders/protobuf/versioned_document.proto) explaining why the wire-format package name must be preserved; place the brief comment directly above or beside each filename so future reviewers see the intentional exception.pyicloud/services/notes/client.py (2)
429-438: Type ignore comment masks potential runtime issue.

The inline `# type: ignore[dict-item]` comment on line 432 suppresses a type error where a plain dict is passed instead of a `CKZoneID` model. Consider constructing the proper model instance for type safety.

♻️ Proposed fix

```diff
         req = CKZoneChangesRequest(
             zones=[
                 CKZoneChangesZoneReq(
-                    zoneID={"zoneName": zone_name, "zoneType": "REGULAR_CUSTOM_ZONE"},  # type: ignore[dict-item]
+                    zoneID=CKZoneID(zoneName=zone_name, zoneType="REGULAR_CUSTOM_ZONE"),
                     desiredRecordTypes=[],
                     desiredKeys=[],
                     reverse=False,
                 )
             ]
         )
```
CKZoneIDfrom the cloudkit module.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/client.py` around lines 429 - 438, The code passes a plain dict into CKZoneChangesZoneReq.zoneID and silences the type error with "# type: ignore[dict-item]"; replace that dict with a proper CKZoneID instance (import CKZoneID) and construct it using zone_name (e.g., CKZoneID(zoneName=zone_name, zoneType="REGULAR_CUSTOM_ZONE")) when building the CKZoneChangesRequest/CKZoneChangesZoneReq so type checking and runtime behavior are correct.
420-425: Redundant `pass` after exception handling.

The `pass` statement after the exception handling block is unnecessary and can be removed.

♻️ Proposed fix

```diff
         except Exception as e:
             LOGGER.warning(
                 "Failed to get sync token via query, falling back. Error: %s", e
             )
-            # ignore and fall back
-            pass
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/client.py` around lines 420 - 425, Remove the redundant "pass" inside the except block that logs failures to get the sync token (the block starting with "except Exception as e:" which calls LOGGER.warning("Failed to get sync token via query, falling back. Error: %s", e)); simply let the except block finish after the LOGGER.warning call without the trailing pass so no-op is not duplicated.tests/services/test_reminders.py (1)
13-22: Service initialisation test could verify internal component wiring.

The test verifies callable attributes exist but doesn't assert that `_reads` and `_writes` are properly instantiated. Consider adding assertions for these internal components if they're part of the public contract.
Verify each finding against the current code and only fix it if needed. In `@tests/services/test_reminders.py` around lines 13 - 22, The test_reminders_service_init currently only asserts callability of facade methods; update it to verify the internal wiring by asserting that the RemindersService instance has its _reads and _writes attributes properly instantiated (e.g., assert service._reads is not None and assert service._writes is not None, or use isinstance(service._reads, ReadsHelper) and isinstance(service._writes, WritesHelper) if those helper classes exist) to ensure the CloudKit client helpers are wired during initialization.pyicloud/services/notes/service.py (2)
465-483: Duplicate lookup logic between `export_note` and `render_note`.

Both `export_note` (lines 465-472) and `render_note` (lines 498-505) contain identical logic for looking up and extracting the target record. Consider extracting this to a helper method to reduce duplication.

♻️ Proposed refactor

```python
def _lookup_note_record(self, note_id: str) -> CKRecord:
    """Lookup a note record by ID, raising NoteNotFound if absent."""
    resp = self._raw.lookup([note_id])
    for rec in resp.records:
        if isinstance(rec, CKRecord) and rec.recordName == note_id:
            return rec
    raise NoteNotFound(f"Note not found: {note_id}")
```

Then use in both methods:

```python
def export_note(self, note_id: str, output_dir: str, **config_kwargs) -> str:
    target = self._lookup_note_record(note_id)
    # ... rest of implementation
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/service.py` around lines 465 - 483, Extract the duplicated lookup loop into a new helper method _lookup_note_record(self, note_id: str) that calls self._raw.lookup([note_id]), iterates resp.records, returns the CKRecord whose recordName equals note_id, and raises NoteNotFound(f"Note not found: {note_id}") if none found; then replace the inline lookup in export_note and render_note to call self._lookup_note_record(note_id) (preserve the lazy imports of NoteExporter/ExportConfig in export_note and any rendering imports in render_note).
393-408: Overly defensive logging with nested try/except.

The try/except block wrapping the logging statements (lines 395-408) catches all exceptions during logging, which masks potential issues. If `note_body.text` access fails, the outer exception handler would be more appropriate.

♻️ Proposed fix

```diff
         note_body = self._decode_note_body(target)
-        # Minimal breadcrumbs for body decode outcome
-        try:
-            if note_body and note_body.text:
-                LOGGER.info(
-                    "notes.body.decoded ok id=%s len=%d",
-                    summary.id,
-                    len(note_body.text),
-                )
-            else:
-                LOGGER.info("notes.body.decoded empty id=%s", summary.id)
-        except Exception:
-            LOGGER.info(
-                "notes.body.decoded %s",
-                "ok" if note_body and note_body.text else "empty",
-            )
+        if note_body and note_body.text:
+            LOGGER.info(
+                "notes.body.decoded ok id=%s len=%d",
+                summary.id,
+                len(note_body.text),
+            )
+        else:
+            LOGGER.info("notes.body.decoded empty id=%s", summary.id)
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/service.py` around lines 393 - 408, Remove the overly broad try/except around the logging and log deterministically using safe attribute access; replace the try block in the notes body decode section so it no longer swallows all exceptions and instead uses a guarded check like if note_body and getattr(note_body, "text", None): LOGGER.info("notes.body.decoded ok id=%s len=%d", summary.id, len(note_body.text)) else: LOGGER.info("notes.body.decoded empty id=%s", summary.id) — this avoids potential attribute errors while keeping logging explicit (reference symbols: note_body, note_body.text, summary.id, LOGGER.info).pyicloud/services/notes/domain.py (2)
15-18: Field name `bytes` shadows Python builtin.

The field name `bytes` on line 16 shadows the Python builtin type. While functionally correct (Pydantic handles this fine), it may cause confusion when reading the code or in contexts where the builtin `bytes` type is needed within the class.

Consider renaming to `content` or `raw_bytes` for clarity.

♻️ Proposed fix

```diff
 class NoteBody(FrozenServiceModel):
-    bytes: bytes
+    content: bytes
     text: Optional[str] = None
     attachment_ids: List[AttachmentId] = Field(default_factory=list)
```
pyicloud/services/notes/decoding.py:90to usecontent=docinstead ofbytes=doc.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/domain.py` around lines 15 - 18, Rename the NoteBody model field named bytes to a non-shadowing name (e.g., content or raw_bytes) to avoid shadowing the builtin bytes; update the class NoteBody declaration to replace bytes: bytes with content: bytes (or raw_bytes: bytes) and keep text: Optional[str] and attachment_ids: List[AttachmentId] unchanged, then update all call sites that construct NoteBody (notably the usage in pyicloud/services/notes/decoding.py where NoteBody(...) is created) to pass content=<doc> (or raw_bytes=<doc>) instead of bytes=<doc> so field names match.
18-18: Mutable list in frozen model allows mutation despite immutability intent.

While `FrozenServiceModel` with `frozen=True` prevents reassigning `attachment_ids` to a new list, it does not prevent mutating the list contents (e.g., `note_body.attachment_ids.append(...)`). If strict immutability is required, consider using a `tuple` instead.

♻️ Proposed fix for strict immutability

```diff
+from typing import Tuple
+
 class NoteBody(FrozenServiceModel):
     bytes: bytes
     text: Optional[str] = None
-    attachment_ids: List[AttachmentId] = Field(default_factory=list)
+    attachment_ids: Tuple[AttachmentId, ...] = Field(default_factory=tuple)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/domain.py` at line 18, The model Field attachment_ids on FrozenServiceModel is declared as a mutable List[AttachmentId] which allows in-place mutation despite frozen=True; change the type to an immutable Tuple by replacing List[AttachmentId] with Tuple[AttachmentId, ...] (and adjust the default to an empty tuple/default_factory=tuple) so Pydantic will coerce incoming lists to tuples and enforce immutability; update any imports (typing.Tuple) and ensure any construction sites still accept lists (they will be coerced) and any code that mutates attachment_ids is adjusted accordingly.pyicloud/common/cloudkit/models.py (2)
102-119: Heuristic for seconds vs milliseconds may be fragile.

The threshold `100_000_000_000` (≈ year 5138 if read as seconds, ≈ 1973 if read as milliseconds) works for typical dates but could misclassify edge-case timestamps. Consider documenting this limitation or adding a comment explaining the chosen boundary.
Verify each finding against the current code and only fix it if needed. In `@pyicloud/common/cloudkit/models.py` around lines 102 - 119, The _from_secs_or_millis function uses a magic numeric threshold (100_000_000_000) to distinguish seconds vs milliseconds which is fragile and undocumented; replace the literal with a named constant (e.g., SECONDS_MILLIS_BOUNDARY) and add a concise comment above it explaining the heuristic (values below boundary treated as seconds to cover dates up to ~5138 CE, otherwise milliseconds) so future maintainers understand the tradeoff and can adjust if needed; keep the existing logic in _from_secs_or_millis but reference the new constant instead of the raw number.
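The boundary the comment asks to document can be illustrated with plain numbers (constant and function names below are hypothetical, not the module's): values at or above 10^11 are read as milliseconds, anything below as seconds.

```python
# A Unix timestamp of 1e11 seconds falls in year ~5138, while 1e11
# milliseconds is early 1973 -- so realistic dates sort cleanly onto
# either side of the boundary.
SECONDS_MILLIS_BOUNDARY = 100_000_000_000

def to_epoch_seconds(value: float) -> float:
    """Normalise an ambiguous seconds-or-milliseconds value to seconds."""
    if value >= SECONDS_MILLIS_BOUNDARY:
        return value / 1000.0
    return float(value)

assert to_epoch_seconds(1_700_000_000) == 1_700_000_000.0      # seconds, 2023
assert to_epoch_seconds(1_700_000_000_000) == 1_700_000_000.0  # millis, 2023
```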
83-97:PlainSerializerreturn type mismatch for nullable variant.The
MillisDateTimeOrNoneserializer declaresreturn_type=intbut the lambda can returnNone. This may cause Pydantic schema generation or type-checker warnings.♻️ Proposed fix
 MillisDateTimeOrNone = Annotated[
     Optional[datetime],
     BeforeValidator(lambda v: None if v is None else _from_millis_or_none(v)),
     PlainSerializer(
         lambda v: None if v is None else _to_millis(v),
-        return_type=int,
+        return_type=Optional[int],
         when_used="json",
     ),
     WithJsonSchema(
         {
             "type": ["integer", "null"],
             "description": "milliseconds since Unix epoch or null sentinel",
         }
     ),
 ]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/common/cloudkit/models.py` around lines 83 - 97, MillisDateTimeOrNone's PlainSerializer declares return_type=int while the serializer lambda can return None; update the serializer metadata to reflect a nullable integer return type (e.g., use an optional/union return type such as Optional[int] or Union[int, None] / "int | None" depending on project typing conventions) so return_type matches the lambda, keeping the serializer lambda and BeforeValidator unchanged; reference the PlainSerializer attached to MillisDateTimeOrNone to locate the change.

pyicloud/services/reminders/_reads.py (1)
237-280: No handling for unrecognised record types in `_ingest_compound_record`.

If an unknown record type arrives, the method silently returns without processing or logging. Consider adding a debug log for unrecognised types to aid troubleshooting.
♻️ Proposed fix
     if record_type == "RecurrenceRule":
         recurrence_rule = self._mapper.record_to_recurrence_rule(rec)
         recurrence_rules[recurrence_rule.id] = recurrence_rule
+        return
+
+    self._logger.debug(
+        "Unrecognised record type in compound query: %s (recordName=%s)",
+        record_type,
+        rec.recordName,
+    )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/reminders/_reads.py` around lines 237 - 280, The _ingest_compound_record function currently ignores unknown rec.recordType values; add a final else branch (after handling "RecurrenceRule") that logs a debug message including the unrecognised record_type and an identifier from rec (e.g., rec.recordName or rec.recordID) to help troubleshooting. Use the same logger instance used in this class (e.g., self.logger or self._logger) and include the record_type and rec.recordName in the message so developers can trace unexpected record types from self._mapper.record_to_* call sites.

pyicloud/services/notes/protobuf/notes.proto (2)
38-47: `STYLE_TYPE_TITLE = 0` as enum zero value may cause semantic issues.

In proto3, the zero value is the default. Using `STYLE_TYPE_TITLE` as zero means unset fields will default to "title" style, which may not be the intended behaviour. Consider whether a sentinel value like `STYLE_TYPE_UNSPECIFIED = 0` would be more appropriate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/protobuf/notes.proto` around lines 38 - 47, The enum StyleType currently uses STYLE_TYPE_TITLE = 0 which makes the default/unspecified proto3 value map to "title"; change the zero value to a sentinel like STYLE_TYPE_UNSPECIFIED = 0 and renumber the other values so STYLE_TYPE_TITLE is a non-zero value (e.g., increment existing values as needed), then update any code paths that reference StyleType or expect the old numeric values (search for StyleType, STYLE_TYPE_TITLE, STYLE_TYPE_HEADING, etc.) to use the new enum entries or numeric values accordingly.
1-3: Package versioning and enum naming conventions.

Buf static analysis flags several proto3 convention violations:

- Package name should be versioned (e.g., `notes.v1`)
- Enum zero values should use an `_UNSPECIFIED` suffix

These are best-practice conventions for API evolution and protobuf hygiene. However, since this schema models Apple's existing Notes format rather than a public API, strict adherence may not be necessary.
If you decide to adopt Buf conventions for consistency:
-package notes;
+package notes.v1;

And for enums (example):

 enum Highlight {
-  HIGHLIGHT_UNKNOWN = 0;
+  HIGHLIGHT_UNSPECIFIED = 0;
   HIGHLIGHT_PURPLE = 1;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/protobuf/notes.proto` around lines 1 - 3, Update the protobuf package declaration from notes to a versioned package like notes.v1 by changing the package statement in notes.proto, and rename all enum zero values to follow the _UNSPECIFIED suffix convention (e.g., change any EnumName { VALUE_UNKNOWN = 0; } to VALUE_UNSPECIFIED = 0) while keeping their numeric values unchanged; after making these changes update any import/fully-qualified references to notes package and regen the language-specific artifacts so code using the Notes-related enums and package (symbols in notes.proto such as the package declaration and enum type names) remains consistent.

pyicloud/services/notes/rendering/exporter.py (2)
42-50: Consider logging exceptions in catch-all handlers.

The broad `except Exception` here and in similar locations throughout the file silently swallows errors, making debugging difficult. Even if the intent is resilience, logging at DEBUG level would aid troubleshooting.

💡 Example improvement
 try:
     nb = BodyDecoder().decode(raw)
     if not nb or not getattr(nb, "bytes", None):
         return None
     msg = pb.NoteStoreProto()
     msg.ParseFromString(nb.bytes)
     return getattr(getattr(msg, "document", None), "note", None)
-except Exception:
+except Exception as exc:
+    LOGGER.debug("Failed to decode/parse note: %s", exc)
     return None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/exporter.py` around lines 42 - 50, The try/except around BodyDecoder().decode(raw) silently swallows errors; update the handler to log the caught exception at DEBUG (instead of returning silently) so failures are visible during troubleshooting. Specifically, in the block that uses BodyDecoder, nb, msg.ParseFromString and returns msg.document.note, catch Exception as e and call the module logger (or logging.getLogger(__name__)) logger.debug(...) including the exception info (e.g., logger.debug("notes export parse error", exc_info=e) or logger.exception at DEBUG) before returning None. Ensure any added import (logging) and the logger variable are consistent with the file's logging pattern.
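The pattern from this comment in isolation, with a stand-in decode step (the real code parses a protobuf note via `BodyDecoder`; this sketch just decodes UTF-8 to keep it self-contained):

```python
import logging

LOGGER = logging.getLogger(__name__)


def parse_or_none(raw: bytes):
    """Return the decoded payload, or None with a DEBUG trace on failure."""
    try:
        return raw.decode("utf-8")  # stand-in for the real decode/parse step
    except Exception as exc:
        # Swallow the error for resilience, but keep it visible for debugging.
        LOGGER.debug("Failed to decode payload: %s", exc)
        return None
```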
223-225: URL scheme check is case-sensitive.

The check `url.startswith("http://") or url.startswith("https://")` won't match uppercase schemes like `HTTP://`, which are technically valid per RFC 3986. This applies to similar checks at lines 308, 403, and 461.

💡 Case-insensitive URL scheme check
-if not (url and (url.startswith("http://") or url.startswith("https://"))):
+url_lower = (url or "").lower()
+if not (url and (url_lower.startswith("http://") or url_lower.startswith("https://"))):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/exporter.py` around lines 223 - 225, The URL scheme check using url.startswith("http://") or url.startswith("https://") is case-sensitive and will miss schemes like "HTTP://"; update the checks to be case-insensitive by normalizing the scheme (e.g., use url.lower().startswith(...) or parse the URL with urllib.parse.urlparse and inspect the .scheme lowercased) wherever this pattern appears (the current `url` check around the thumbnail skip and the similar checks at the other occurrences near lines 308, 403, and 461) so that HTTP/HTTPS are recognized regardless of letter case.

tests/test_notes_cli.py (1)
24-27: Hardcoded `/tmp` path is not portable.

The path `/tmp/python-test-results` won't work on Windows. Consider using `tempfile.gettempdir()` for cross-platform compatibility.

♻️ Use portable temp directory
+import tempfile
+
 class TestNotesCli(unittest.TestCase):
     def _output_dir(self, name):
-        path = os.path.join("/tmp/python-test-results", "notes-cli", name)
+        path = os.path.join(tempfile.gettempdir(), "python-test-results", "notes-cli", name)
         os.makedirs(path, exist_ok=True)
         return path

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes_cli.py` around lines 24 - 27, Replace the hardcoded "/tmp/python-test-results" in the _output_dir method with a cross-platform temp directory; use tempfile.gettempdir() (or tempfile.mkdtemp() if you need unique dirs) to build the base path, join it with "notes-cli" and the given name, ensure os.makedirs still uses exist_ok=True, and update _output_dir to reference tempfile so tests run on Windows and Unix.

examples/notes_cli.py (3)
189-199: Move import to top of file.

The `time` module is imported inside `main()` and again at line 399. Moving it to the top-level imports would be cleaner and avoid the duplicate import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/notes_cli.py` around lines 189 - 199, The file currently imports time inside main() and again later, causing duplicate imports; move the "import time" statement to the module-level imports at the top of the file, remove the inner imports (the one inside main() and the later duplicate import), and ensure the module-level import supports the phase function and t0 variable (phase, t0) which rely on time.perf_counter(); this keeps a single top-level import and avoids repeated imports.
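A minimal sketch of the timing helper the CLI appears to use (function and variable names here are illustrative; only the reliance on `time.perf_counter()` for the `phase`/`t0` pair is taken from the comment):

```python
import time


def make_phase_logger():
    """Return a phase() callback that reports time elapsed since creation."""
    t0 = time.perf_counter()

    def phase(label: str) -> float:
        elapsed = time.perf_counter() - t0
        print(f"[{elapsed:8.3f}s] {label}")
        return elapsed

    return phase
```

With `time` imported once at module level, both the closure and any later timing code share the same import.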
239-244: Duplicated `_safe_name` function.

This function duplicates the logic in `pyicloud/services/notes/rendering/exporter.py:_safe_name`. Consider importing and reusing the existing function to avoid drift between implementations.

♻️ Reuse existing implementation
 from pyicloud.services.notes.rendering.exporter import decode_and_parse_note
+from pyicloud.services.notes.rendering.exporter import _safe_name
 from pyicloud.services.notes.rendering.options import ExportConfig

 # ... later in main() ...
-    def _safe_name(s: Optional[str]) -> str:
-        if not s:
-            return "untitled"
-        s = re.sub(r"\s+", " ", s).strip()
-        s = re.sub(r"[^\w\- ]+", "-", s)
-        return s[:60] or "untitled"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/notes_cli.py` around lines 239 - 244, The _safe_name implementation is duplicated; remove the local _safe_name in examples/notes_cli.py and import and reuse the canonical implementation from pyicloud.services.notes.rendering.exporter (e.g., import _safe_name from that module), then update any local references to call that imported function; ensure the import is added to the module imports and that behavior remains the same for None/empty input and length trimming.
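For context, the behaviour of the helper being deduplicated (its body appears in the diff above) can be exercised stand-alone:

```python
import re
from typing import Optional


def safe_name(s: Optional[str]) -> str:
    """Collapse whitespace, replace unsafe characters, and cap length at 60."""
    if not s:
        return "untitled"
    s = re.sub(r"\s+", " ", s).strip()
    s = re.sub(r"[^\w\- ]+", "-", s)
    return s[:60] or "untitled"
```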
158-161: Bare except clause may mask errors.

Using `except Exception` here catches all exceptions, making it hard to distinguish between invalid input and actual errors. Consider catching `ValueError` specifically.

💡 More specific exception handling
 try:
     idx = int(sel) if sel else 0
-except Exception:
+except ValueError:
     idx = 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/notes_cli.py` around lines 158 - 161, Replace the broad exception handler around converting sel to an integer with a specific catch for ValueError: in the try/except that assigns idx = int(sel) if sel else 0, change the except Exception to except ValueError so only parsing errors are handled and other exceptions bubble up; keep the fallback idx = 0 behavior intact.

tests/test_notes_rendering.py (1)
121-123: Remove debug print statements from test.

Debug print statements in tests add noise to test output. Consider removing them or using the test framework's logging capabilities if the output is needed for debugging.
🧹 Remove print statements
-print("\n--- Rendered HTML Preview (First 500 chars) ---")
-print(html[:500])
-print("-----------------------------------------------")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_notes_rendering.py` around lines 121 - 123, Remove the three debug print statements that output the "Rendered HTML Preview" and separator in tests/test_notes_rendering.py; either delete the lines printing "\n--- Rendered HTML Preview (First 500 chars) ---", print(html[:500]), and the separator, or replace them with proper test logging using the test framework (e.g., pytest logging/capsys) so tests do not produce noisy stdout; ensure no remaining print(...) calls remain in the test function that renders HTML.
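One way to follow the comment's suggestion with the standard library (pytest's `caplog`/`capsys` would work too; this uses `unittest.assertLogs`, and the logger name and test class are illustrative):

```python
import logging
import unittest

LOGGER = logging.getLogger("notes.test")


class TestRenderPreview(unittest.TestCase):
    def test_preview_logged_not_printed(self):
        html = "<p>hello</p>"
        # Route the preview through logging instead of print(); the output
        # only appears when the runner enables logging, keeping stdout clean.
        with self.assertLogs("notes.test", level="DEBUG") as captured:
            LOGGER.debug("Rendered HTML preview: %s", html[:500])
        self.assertIn("hello", captured.output[0])
```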
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@example_reminders.py`:
- Around line 328-342: Replace the single immediate fetch in assert_round_trip
with the existing polling helper: call wait_until (using the script's
consistency timeout constant) to repeatedly invoke
reminders_api.get(reminder_id) until the returned Reminder's fields (title,
description, completed, due_date, priority, flagged, all_day, time_zone,
parent_reminder_id) match the provided expected_* values (skip expectations that
are None), then return the matched Reminder; ensure the wait_until
timeout/interval are used so tests tolerate eventual consistency.
- Around line 169-176: The current 2FA handling assumes a numeric code; update
the branch that checks api.requires_2fa to also handle security-key flows by
checking api.security_key_names (or similar property) and, if present, invoke
api.confirm_security_key(name) (prompt user to select a name if multiple)
instead of calling api.validate_2fa_code; otherwise keep the existing
validate_2fa_code path. After successful confirmation or code validation,
continue to check api.is_trusted_session and call api.trust_session() as before,
and ensure errors from confirm_security_key/validate_2fa_code are surfaced
consistently (raise or log) so authentication failures remain obvious.
In `@pyicloud/services/notes/rendering/attachments.py`:
- Around line 170-173: The current code uses _safe_url(ctx.primary_url or
ctx.thumb_url, ...) which validates only the first truthy value and lets an
invalid primary_url hide a valid thumb_url; change to validate each URL
independently by first calling _safe_url(ctx.primary_url,
allowed_schemes={"http","https"}) and if that returns falsy/None then call
_safe_url(ctx.thumb_url, allowed_schemes={"http","https"}) so url is set to the
first actually valid URL (referencing ctx.primary_url, ctx.thumb_url and
_safe_url in attachments.py).
In `@pyicloud/services/notes/rendering/table_builder.py`:
- Around line 126-155: The parse_cell_columns method currently lets
ParseFromString, entries[...] lookups, and render_note_cb(cell_note) raise and
abort rendering; modify parse_cell_columns to catch exceptions around each risky
operation (parsing, index lookups, and calling render_note_cb) and on any
exception either continue (skip that cell/column/row) or assign None/leave cell
as None so the renderer degrades gracefully; specifically wrap access to
self.entries[col.key.object_index], self.entries[col.value.object_index],
self.entries[row.key.object_index], self.entries[row.value.object_index], and
the call to self.render_note_cb(cell_note) with try/except and avoid re-raising,
and mirror the same defensive pattern used in the later block (lines ~176-245)
so both parse_cell_columns and the corresponding row-parsing helper fail closed
to None instead of propagating errors (use _uuid_index_from_entry and
render_note_cb names to locate spots).
In `@pyicloud/services/reminders/_protocol.py`:
- Around line 83-134: The function _decode_crdt_document currently swallows all
terminal decode errors and returns an empty string, which causes malformed CRDT
payloads to silently become blank fields; change its behavior to raise a clear
exception instead of returning "" on terminal failures (the base64 decode except
block and the final return ""), e.g. raise a ValueError or custom DecodeError
with contextual info after logging, so callers (like the reminder hydration in
_mappers.py) can surface or handle decode errors appropriately; update all
places in _decode_crdt_document where "" is returned on failure to raise the
exception and ensure any callers that expect a string handle or propagate that
exception.
In `@pyicloud/services/reminders/_writes.py`:
- Around line 376-400: The update() flow currently omits CloudKit keys when
reminder.due_date, reminder.time_zone, or reminder.parent_reminder_id are
cleared locally, leaving server values unchanged; change the logic around fields
and fields_mod so that when those attributes are explicitly None/cleared you
still add the corresponding entry to fields and fields_mod to instruct a removal
— e.g. for reminder.due_date is None set fields["DueDate"] =
{"type":"TIMESTAMP","value":None} and append "dueDate" to fields_mod; for
reminder.time_zone is None set fields["TimeZone"] =
{"type":"STRING","value":None} and append "timeZone"; for
reminder.parent_reminder_id is None set fields["ParentReminder"] =
{"type":"REFERENCE","value":None} and append "parentReminder" — ensure this
logic lives next to the existing branches that handle non-None values so cleared
fields are explicitly sent in update().
In `@README.md`:
- Around line 946-956: The example uses sync_cursor() and then immediately calls
iter_changes(since=cursor), which shows a no-op; change it to demonstrate
persisting and reusing a previously saved cursor: call reminders.sync_cursor()
at an earlier point (or load a persisted cursor value), persist that cursor
(e.g., save to storage), and then show iter_changes(since=loaded_cursor) running
in a later invocation; reference the functions sync_cursor(),
iter_changes(since=...), and the ReminderChangeEvent type so readers know to
save and later reload the cursor before calling iter_changes.
---
Nitpick comments:
In `@buf.yaml`:
- Around line 6-10: Add a one-line rationale comment for each ignored proto
entry under ignore_only:PACKAGE_DIRECTORY_MATCH (e.g., for
pyicloud/services/notes/protobuf/notes.proto,
pyicloud/services/reminders/protobuf/reminders.proto, and
pyicloud/services/reminders/protobuf/versioned_document.proto) explaining why
the wire-format package name must be preserved; place the brief comment directly
above or beside each filename so future reviewers see the intentional exception.
In `@examples/notes_cli.py`:
- Around line 189-199: The file currently imports time inside main() and again
later, causing duplicate imports; move the "import time" statement to the
module-level imports at the top of the file, remove the inner imports (the one
inside main() and the later duplicate import), and ensure the module-level
import supports the phase function and t0 variable (phase, t0) which rely on
time.perf_counter(); this keeps a single top-level import and avoids repeated
imports.
- Around line 239-244: The _safe_name implementation is duplicated; remove the
local _safe_name in examples/notes_cli.py and import and reuse the canonical
implementation from pyicloud.services.notes.rendering.exporter (e.g., import
_safe_name from that module), then update any local references to call that
imported function; ensure the import is added to the module imports and that
behavior remains the same for None/empty input and length trimming.
- Around line 158-161: Replace the broad exception handler around converting sel
to an integer with a specific catch for ValueError: in the try/except that
assigns idx = int(sel) if sel else 0, change the except Exception to except
ValueError so only parsing errors are handled and other exceptions bubble up;
keep the fallback idx = 0 behavior intact.
In `@pyicloud/common/cloudkit/models.py`:
- Around line 102-119: The _from_secs_or_millis function uses a magic numeric
threshold (100_000_000_000) to distinguish seconds vs milliseconds which is
fragile and undocumented; replace the literal with a named constant (e.g.,
SECONDS_MILLIS_BOUNDARY) and add a concise comment above it explaining the
heuristic (values below boundary treated as seconds to cover dates up to ~5138
CE, otherwise milliseconds) so future maintainers understand the tradeoff and
can adjust if needed; keep the existing logic in _from_secs_or_millis but
reference the new constant instead of the raw number.
- Around line 83-97: MillisDateTimeOrNone's PlainSerializer declares
return_type=int while the serializer lambda can return None; update the
serializer metadata to reflect a nullable integer return type (e.g., use an
optional/union return type such as Optional[int] or Union[int, None] / "int |
None" depending on project typing conventions) so return_type matches the
lambda, keeping the serializer lambda and BeforeValidator unchanged; reference
the PlainSerializer attached to MillisDateTimeOrNone to locate the change.
In `@pyicloud/services/notes/client.py`:
- Around line 429-438: The code passes a plain dict into
CKZoneChangesZoneReq.zoneID and silences the type error with "# type:
ignore[dict-item]"; replace that dict with a proper CKZoneID instance (import
CKZoneID) and construct it using zone_name (e.g., CKZoneID(zoneName=zone_name,
zoneType="REGULAR_CUSTOM_ZONE")) when building the
CKZoneChangesRequest/CKZoneChangesZoneReq so type checking and runtime behavior
are correct.
- Around line 420-425: Remove the redundant "pass" inside the except block that
logs failures to get the sync token (the block starting with "except Exception
as e:" which calls LOGGER.warning("Failed to get sync token via query, falling
back. Error: %s", e)); simply let the except block finish after the
LOGGER.warning call without the trailing pass so no-op is not duplicated.
In `@pyicloud/services/notes/domain.py`:
- Around line 15-18: Rename the NoteBody model field named bytes to a
non-shadowing name (e.g., content or raw_bytes) to avoid shadowing the builtin
bytes; update the class NoteBody declaration to replace bytes: bytes with
content: bytes (or raw_bytes: bytes) and keep text: Optional[str] and
attachment_ids: List[AttachmentId] unchanged, then update all call sites that
construct NoteBody (notably the usage in pyicloud/services/notes/decoding.py
where NoteBody(...) is created) to pass content=<doc> (or raw_bytes=<doc>)
instead of bytes=<doc> so field names match.
- Line 18: The model Field attachment_ids on FrozenServiceModel is declared as a
mutable List[AttachmentId] which allows in-place mutation despite frozen=True;
change the type to an immutable Tuple by replacing List[AttachmentId] with
Tuple[AttachmentId, ...] (and adjust the default to an empty
tuple/default_factory=tuple) so Pydantic will coerce incoming lists to tuples
and enforce immutability; update any imports (typing.Tuple) and ensure any
construction sites still accept lists (they will be coerced) and any code that
mutates attachment_ids is adjusted accordingly.
In `@pyicloud/services/notes/protobuf/notes.proto`:
- Around line 38-47: The enum StyleType currently uses STYLE_TYPE_TITLE = 0
which makes the default/unspecified proto3 value map to "title"; change the zero
value to a sentinel like STYLE_TYPE_UNSPECIFIED = 0 and renumber the other
values so STYLE_TYPE_TITLE is a non-zero value (e.g., increment existing values
as needed), then update any code paths that reference StyleType or expect the
old numeric values (search for StyleType, STYLE_TYPE_TITLE, STYLE_TYPE_HEADING,
etc.) to use the new enum entries or numeric values accordingly.
- Around line 1-3: Update the protobuf package declaration from notes to a
versioned package like notes.v1 by changing the package statement in
notes.proto, and rename all enum zero values to follow the _UNSPECIFIED suffix
convention (e.g., change any EnumName { VALUE_UNKNOWN = 0; } to
VALUE_UNSPECIFIED = 0) while keeping their numeric values unchanged; after
making these changes update any import/fully-qualified references to notes
package and regen the language-specific artifacts so code using the
Notes-related enums and package (symbols in notes.proto such as the package
declaration and enum type names) remains consistent.
In `@pyicloud/services/notes/rendering/exporter.py`:
- Around line 42-50: The try/except around BodyDecoder().decode(raw) silently
swallows errors; update the handler to log the caught exception at DEBUG
(instead of returning silently) so failures are visible during troubleshooting.
Specifically, in the block that uses BodyDecoder, nb, msg.ParseFromString and
returns msg.document.note, catch Exception as e and call the module logger (or
logging.getLogger(__name__)) logger.debug(...) including the exception info
(e.g., logger.debug("notes export parse error", exc_info=e) or logger.exception
at DEBUG) before returning None. Ensure any added import (logging) and the
logger variable are consistent with the file's logging pattern.
- Around line 223-225: The URL scheme check using url.startswith("http://") or
url.startswith("https://") is case-sensitive and will miss schemes like
"HTTP://"; update the checks to be case-insensitive by normalizing the scheme
(e.g., use url.lower().startswith(...) or parse the URL with
urllib.parse.urlparse and inspect the .scheme lowercased) wherever this pattern
appears (the current `url` check around the thumbnail skip and the similar
checks at the other occurrences near lines 308, 403, and 461) so that HTTP/HTTPS
are recognized regardless of letter case.
In `@pyicloud/services/notes/service.py`:
- Around line 465-483: Extract the duplicated lookup loop into a new helper
method _lookup_note_record(self, note_id: str) that calls
self._raw.lookup([note_id]), iterates resp.records, returns the CKRecord whose
recordName equals note_id, and raises NoteNotFound(f"Note not found: {note_id}")
if none found; then replace the inline lookup in export_note and render_note to
call self._lookup_note_record(note_id) (preserve the lazy imports of
NoteExporter/ExportConfig in export_note and any rendering imports in
render_note).
- Around line 393-408: Remove the overly broad try/except around the logging and
log deterministically using safe attribute access; replace the try block in the
notes body decode section so it no longer swallows all exceptions and instead
uses a guarded check like if note_body and getattr(note_body, "text", None):
LOGGER.info("notes.body.decoded ok id=%s len=%d", summary.id,
len(note_body.text)) else: LOGGER.info("notes.body.decoded empty id=%s",
summary.id) — this avoids potential attribute errors while keeping logging
explicit (reference symbols: note_body, note_body.text, summary.id,
LOGGER.info).
In `@pyicloud/services/reminders/_reads.py`:
- Around line 237-280: The _ingest_compound_record function currently ignores
unknown rec.recordType values; add a final else branch (after handling
"RecurrenceRule") that logs a debug message including the unrecognised
record_type and an identifier from rec (e.g., rec.recordName or rec.recordID) to
help troubleshooting. Use the same logger instance used in this class (e.g.,
self.logger or self._logger) and include the record_type and rec.recordName in
the message so developers can trace unexpected record types from
self._mapper.record_to_* call sites.
In `@tests/services/test_reminders.py`:
- Around line 13-22: The test_reminders_service_init currently only asserts
callability of facade methods; update it to verify the internal wiring by
asserting that the RemindersService instance has its _reads and _writes
attributes properly instantiated (e.g., assert service._reads is not None and
assert service._writes is not None, or use isinstance(service._reads,
ReadsHelper) and isinstance(service._writes, WritesHelper) if those helper
classes exist) to ensure the CloudKit client helpers are wired during
initialization.
In `@tests/test_notes_cli.py`:
- Around line 24-27: Replace the hardcoded "/tmp/python-test-results" in the
_output_dir method with a cross-platform temp directory; use
tempfile.gettempdir() (or tempfile.mkdtemp() if you need unique dirs) to build
the base path, join it with "notes-cli" and the given name, ensure os.makedirs
still uses exist_ok=True, and update _output_dir to reference tempfile so tests
run on Windows and Unix.
In `@tests/test_notes_rendering.py`:
- Around line 121-123: Remove the three debug print statements that output the
"Rendered HTML Preview" and separator in tests/test_notes_rendering.py; either
delete the lines printing "\n--- Rendered HTML Preview (First 500 chars) ---",
print(html[:500]), and the separator, or replace them with proper test logging
using the test framework (e.g., pytest logging/capsys) so tests do not produce
noisy stdout; ensure no remaining print(...) calls remain in the test function
that renders HTML.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 25ff7654-35cb-480f-b1a6-c48f46c87235
📒 Files selected for processing (26)
- README.md
- buf.yaml
- example_reminders.py
- examples/notes_cli.py
- pyicloud/base.py
- pyicloud/common/cloudkit/models.py
- pyicloud/services/notes/client.py
- pyicloud/services/notes/domain.py
- pyicloud/services/notes/protobuf/notes.proto
- pyicloud/services/notes/protobuf/notes_pb2.py
- pyicloud/services/notes/rendering/attachments.py
- pyicloud/services/notes/rendering/exporter.py
- pyicloud/services/notes/rendering/options.py
- pyicloud/services/notes/rendering/renderer.py
- pyicloud/services/notes/rendering/table_builder.py
- pyicloud/services/notes/service.py
- pyicloud/services/reminders/__init__.py
- pyicloud/services/reminders/_protocol.py
- pyicloud/services/reminders/_reads.py
- pyicloud/services/reminders/_writes.py
- tests/services/test_reminders.py
- tests/services/test_reminders_cloudkit.py
- tests/test_base.py
- tests/test_notes.py
- tests/test_notes_cli.py
- tests/test_notes_rendering.py
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/services/test_reminders_cloudkit.py
- pyicloud/services/notes/rendering/options.py
- pyicloud/services/notes/rendering/renderer.py
Actionable comments posted: 6
♻️ Duplicate comments (1)
example_reminders.py (1)
169-209: ⚠️ Potential issue | 🟠 Major

Pass the selected device to the authentication methods.

The security-key branch asks the user to choose a key by name, but `confirm_security_key()` is called without arguments, so it never receives the selection. The method signature accepts an optional `device` parameter (pyicloud/base.py:593), and the documented patterns in examples.py and README.md show the correct usage: `api.confirm_security_key(selected_device)`.

The 2SA branch similarly hard-codes `devices[0]` instead of prompting the user to choose. On accounts with multiple keys or devices, example_reminders.py will always target the first device rather than the user's selection.

Update the code to match the documented flow: retrieve `api.fido2_devices` for security keys (not just names), prompt the user to select, and pass the selected device object to `confirm_security_key()`. Similarly, prompt for device selection in the 2SA branch instead of hard-coding index 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example_reminders.py` around lines 169 - 209, The security-key and 2SA flows currently only use names or the first device; update both to present actual device objects for selection and pass the chosen device into the auth calls: for 2FA, use api.fido2_devices (not api.security_key_names), show their friendly names for the user, validate the chosen index, set selected_device = fido2_devices[selected_index] and call api.confirm_security_key(selected_device); for 2SA, present api.trusted_devices to the user instead of hard-coding devices[0], validate the chosen index, set selected_device = trusted_devices[selected_index] and use that object in api.send_verification_code(selected_device) and api.validate_verification_code(selected_device, code). Ensure index bounds checks and same session-trust logic remain unchanged.
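The selection flow the review asks for can be sketched generically (the `choose_device` helper is hypothetical; the real example would pass the returned object to `api.confirm_security_key(...)` or to the 2SA verification calls):

```python
def choose_device(devices):
    """Prompt the user to pick a device instead of hard-coding index 0.

    `devices` is any non-empty sequence whose items render via str();
    falls back to index 0 on empty, invalid, or out-of-range input.
    """
    for i, device in enumerate(devices):
        print(f"  {i}: {device}")
    raw = input("Which device? [0]: ").strip()
    try:
        idx = int(raw) if raw else 0
    except ValueError:
        idx = 0
    if not 0 <= idx < len(devices):
        idx = 0
    return devices[idx]
```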
🧹 Nitpick comments (3)
pyicloud/services/notes/rendering/table_builder.py (1)
121-124: Add hard caps before allocating the table matrix.
`rows.total * cols.total` is payload-derived; without bounds, malformed input can trigger very large allocations.

Resilience guard example:
```diff
+MAX_TABLE_ROWS = 500
+MAX_TABLE_COLS = 200
+
 def init_table_buffers(self) -> None:
+    if self.rows.total > MAX_TABLE_ROWS or self.cols.total > MAX_TABLE_COLS:
+        self.cells = []
+        return
     self.cells = [
         [Cell() for _ in range(self.cols.total)] for _ in range(self.rows.total)
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/table_builder.py` around lines 121 - 124, init_table_buffers currently allocates self.cells using rows.total and cols.total directly, which can lead to huge allocations from malicious payloads; add hard caps (e.g., MAX_ROWS, MAX_COLS, and/or MAX_CELLS) and validate self.rows.total and self.cols.total against those caps inside init_table_buffers (or a helper) before constructing the matrix, and if limits are exceeded raise a clear exception (ValueError) or clamp to safe limits; reference the init_table_buffers method, self.rows.total, self.cols.total, self.cells and the Cell class when making the change so the allocation is only performed after the size checks.pyicloud/services/notes/rendering/attachments.py (2)
97-109: Consider extracting shared anchor-attribute assembly into one helper.

The same link-attribute and fallback assembly is repeated in several renderers; centralising it will reduce drift and make policy changes safer.
Refactor sketch
```diff
+def _link_attrs(ctx: AttachmentContext) -> dict[str, str]:
+    attrs: dict[str, str] = {}
+    if ctx.link_rel:
+        attrs["rel"] = ctx.link_rel
+    if ctx.link_referrerpolicy:
+        attrs["referrerpolicy"] = ctx.link_referrerpolicy
+    if ctx.link_target:
+        attrs["target"] = ctx.link_target
+    return attrs
```

Then merge `**_link_attrs(ctx)` where link attrs are currently rebuilt manually.

Also applies to: 120-130, 141-165, 245-271, 281-298
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/attachments.py` around lines 97 - 109, Extract a shared helper function _link_attrs(ctx: AttachmentContext) -> dict that builds and returns the anchor attribute dict (starting with {"class": "attachment link"}, computing href via _safe_url(ctx.primary_url, allowed_schemes={"http","https"}), and adding rel/referrerpolicy/target when ctx.link_rel, ctx.link_referrerpolicy, ctx.link_target are present); then replace the manual assembly in AttachmentRenderer.render (the render method shown) and the other renderers by merging its result into ctx.base_attrs(e.g. return h("a", **ctx.base_attrs(_link_attrs(ctx)))(label).render()); keep the existing label fallback (ctx.title or ctx.uti or "attachment") and usage of _safe_url unchanged.
339-340: Remove duplicated `_AUDIO` singleton initialisation.

`_AUDIO` is assigned twice in sequence. It is harmless at runtime, but it adds noise and can hide missed mappings during future edits.

Proposed tidy-up:
```diff
 _IMAGE = _ImageRenderer()
 _AUDIO = _AudioRenderer()
-_AUDIO = _AudioRenderer()
 _VIDEO = _VideoRenderer()
```
Verify each finding against the current code and only fix it if needed. In `@pyicloud/services/notes/rendering/attachments.py` around lines 339 - 340, Remove the duplicate singleton initialisation of _AUDIO: there are two consecutive assignments of _AUDIO = _AudioRenderer() — keep a single assignment and delete the redundant one so _AUDIO is only initialised once; verify no other duplicated mappings exist nearby (references: _AUDIO and _AudioRenderer).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@example_reminders.py`:
- Around line 718-761: The tests call reminders_api.add_location_trigger(...)
then immediately call reminders_api.get(...) and assert on linked-record fields
(e.g., location_arrive_fresh.alarm_ids, reminders_api.alarms_for(...)) which can
race with CloudKit; wrap those post-write lookups in the existing wait_until
polling helper and poll until the expected linked-record state is visible before
asserting (e.g., poll reminders_api.get(location_arrive.id) until alarm_ids
contains the new alarm id and poll
reminders_api.alarms_for(location_arrive_fresh) until rows include the new
alarm); apply the same wait_until-based polling pattern to the other affected
blocks (around add_location_trigger, linked hashtag/attachment/recurrence
writes, and delete checks referenced in the comment ranges).
In `@pyicloud/common/cloudkit/models.py`:
- Around line 41-66: The validators _from_millis_or_none and
_from_secs_or_millis currently raise TypeError for unsupported input types;
change those raises to ValueError so Pydantic v2 will wrap them into
ValidationError. Locate the raise TypeError(...) in function
_from_millis_or_none and the corresponding raise in _from_secs_or_millis and
replace each with raise ValueError(...) preserving the same error message text
and behavior for non-supported types.
In `@pyicloud/services/notes/rendering/table_builder.py`:
- Around line 233-263: The unconditional break after processing a root candidate
stops scanning other candidates; remove that break and instead only return the
rendered table once a candidate actually produced a renderable table.
Specifically, when iterating candidates and using tb.parse_rows, tb.parse_cols,
tb.init_table_buffers and (optionally) tb.parse_cell_columns on
pending_cell_columns, ensure you only call and return tb.render_html_table()
after verifying tb.rows.total>0 and tb.cols.total>0 and that parse_cell_columns
(if run) succeeded; if a candidate is incomplete or parse_cell_columns raises,
continue to the next candidate rather than breaking. Ensure the final fallback
after the loop still returns tb.render_html_table() (or None) if nothing
produced a renderable table.
In `@pyicloud/services/reminders/_writes.py`:
- Around line 95-104: The write paths build CloudKit record names and REFERENCE
fields directly from caller-provided IDs (e.g. passing reminder.id or
reminder_id into record_name in _write_record and into reference field values),
which breaks when callers supply shorthand IDs; run IDs through the existing
_as_record_name(id, "Reminder") normaliser before constructing record_name and
any REFERENCE field values (e.g. where record_type == "Reminder" and fields
contain REFERENCE entries or ResolutionTokenMap uses IDs) so all places that
call _write_record or build reminder references (including the occurrences
around the shown record_name usage and the additional sites listed) use
canonical "Reminder/..." names consistently.
- Around line 445-587: The code performs the CloudKit modify() before validating
child-model fields, which can cause a remote write to succeed and then raise a
local ValidationError; fix by validating inputs before calling
self._get_raw().modify(): in add_location_trigger construct or validate a
LocationTrigger (or call its validation method) using title, address, latitude,
longitude, radius and location_uid and raise before building
reminder_op/alarm_op/trigger_op; apply the same pattern to the image- and
recurrence-related methods by validating ImageAttachment and RecurrenceRule
instances (or invoking their validators) prior to creating CKModifyOperation
objects and invoking modify(), so any invalid radius/image/recurrence is caught
locally and prevents the remote request.
In `@README.md`:
- Around line 961-963: Update the README sentence about iter_changes to
distinguish soft deletes from tombstones: clarify that iter_changes(since=...)
yields ReminderChangeEvent objects where deleted events (type="deleted") may
still include a populated event.reminder for soft-deleted records, whereas only
true tombstones guarantee reminder == None; reference iter_changes,
ReminderChangeEvent and event.reminder in the text so callers understand to
inspect event.reminder on deleted events.
---
Duplicate comments:
In `@example_reminders.py`:
- Around line 169-209: The security-key and 2SA flows currently only use names
or the first device; update both to present actual device objects for selection
and pass the chosen device into the auth calls: for 2FA, use api.fido2_devices
(not api.security_key_names), show their friendly names for the user, validate
the chosen index, set selected_device = fido2_devices[selected_index] and call
api.confirm_security_key(selected_device); for 2SA, present api.trusted_devices
to the user instead of hard-coding devices[0], validate the chosen index, set
selected_device = trusted_devices[selected_index] and use that object in
api.send_verification_code(selected_device) and
api.validate_verification_code(selected_device, code). Ensure index bounds
checks and same session-trust logic remain unchanged.
---
Nitpick comments:
In `@pyicloud/services/notes/rendering/attachments.py`:
- Around line 97-109: Extract a shared helper function _link_attrs(ctx:
AttachmentContext) -> dict that builds and returns the anchor attribute dict
(starting with {"class": "attachment link"}, computing href via
_safe_url(ctx.primary_url, allowed_schemes={"http","https"}), and adding
rel/referrerpolicy/target when ctx.link_rel, ctx.link_referrerpolicy,
ctx.link_target are present); then replace the manual assembly in
AttachmentRenderer.render (the render method shown) and the other renderers by
merging its result into ctx.base_attrs(e.g. return h("a",
**ctx.base_attrs(_link_attrs(ctx)))(label).render()); keep the existing label
fallback (ctx.title or ctx.uti or "attachment") and usage of _safe_url
unchanged.
- Around line 339-340: Remove the duplicate singleton initialisation of _AUDIO:
there are two consecutive assignments of _AUDIO = _AudioRenderer() — keep a
single assignment and delete the redundant one so _AUDIO is only initialised
once; verify no other duplicated mappings exist nearby (references: _AUDIO and
_AudioRenderer).
In `@pyicloud/services/notes/rendering/table_builder.py`:
- Around line 121-124: init_table_buffers currently allocates self.cells using
rows.total and cols.total directly, which can lead to huge allocations from
malicious payloads; add hard caps (e.g., MAX_ROWS, MAX_COLS, and/or MAX_CELLS)
and validate self.rows.total and self.cols.total against those caps inside
init_table_buffers (or a helper) before constructing the matrix, and if limits
are exceeded raise a clear exception (ValueError) or clamp to safe limits;
reference the init_table_buffers method, self.rows.total, self.cols.total,
self.cells and the Cell class when making the change so the allocation is only
performed after the size checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 824ee526-9ff3-4e89-a06f-b7f0cf72c763
📒 Files selected for processing (10)
- README.md
- buf.yaml
- example_reminders.py
- pyicloud/common/cloudkit/models.py
- pyicloud/services/notes/rendering/attachments.py
- pyicloud/services/notes/rendering/table_builder.py
- pyicloud/services/reminders/_protocol.py
- pyicloud/services/reminders/_writes.py
- tests/services/test_reminders_cloudkit.py
- tests/test_notes_rendering.py
🚧 Files skipped from review as they are similar to previous changes (2)
- buf.yaml
- tests/test_notes_rendering.py
example_reminders.py
Outdated
```python
print("Trusted devices:")
for index, device in enumerate(devices):
    print(f"  {index}: {_trusted_device_label(device)}")
```

Check failure: Code scanning / CodeQL: Clear-text logging of sensitive information
Actionable comments posted: 1
♻️ Duplicate comments (2)
README.md (1)
946-952: ⚠️ Potential issue | 🟡 Minor: Load the delta cursor from storage, not from the line above.

`loaded_cursor = cursor` still makes this read like a same-run delta query, which will usually yield nothing. Split this into explicit “run 1 / run 2” steps or show `loaded_cursor` coming from durable storage.

Suggested doc tweak:
```diff
-# Earlier run: capture and persist a cursor somewhere durable.
-cursor = reminders.sync_cursor()
-# save cursor to disk / database here
-
-# Later run: reload the previously saved cursor before asking for deltas.
-loaded_cursor = cursor
+# Run 1: capture and persist a cursor somewhere durable.
+cursor = reminders.sync_cursor()
+# save_cursor(cursor)
+
+# Run 2: reload the previously saved cursor before asking for deltas.
+loaded_cursor = load_cursor()
+if loaded_cursor is None:
+    raise RuntimeError("No saved cursor available yet")
 for event in reminders.iter_changes(since=loaded_cursor):
     print(event.type, event.reminder_id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 946 - 952, The README example incorrectly reuses in-memory cursor (`loaded_cursor = cursor`) making it a same-run query; instead demonstrate persisting the cursor returned by reminders.sync_cursor() and reloading it in a later run (e.g., show saving the value to durable storage and then assigning loaded_cursor from that storage) before calling reminders.iter_changes(since=loaded_cursor); split into “Run 1” (call reminders.sync_cursor() and persist the cursor) and “Run 2” (load the persisted cursor into loaded_cursor and pass it to reminders.iter_changes) and update the snippet to clearly show those two separate steps and the storage interaction.

example_reminders.py (1)
1192-1267: ⚠️ Potential issue | 🟠 Major: Poll `list_reminders()` before asserting on the compound result.

This block snapshots `list_reminders()` once immediately after several live writes, then asserts on reminder, alarm, trigger, hashtag, attachment, and recurrence visibility. CloudKit can lag here even after the helper-specific lookups have converged, so the validator can still fail spuriously on a healthy account. Reuse `wait_until()` around the compound query and assert on the stabilised result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example_reminders.py` around lines 1192 - 1267, The compound query (calls to reminders_api.list_reminders producing compound_all/compound_open) is taken once and asserted against immediately; instead poll until the result stabilises using the existing wait_until helper: repeatedly call reminders_api.list_reminders(target_list.id, include_completed=True, results_limit=args.results_limit) until the returned structure contains the expected keys and expected_created_ids (and contains arrive_alarm.id, leave_alarm.id, arrive_trigger.id, leave_trigger.id, hashtag_created.id, attachment_created.id, recurrence_created.id) or a timeout occurs, then run the tracker.expect assertions against that stabilised compound_all; ensure you also check include_completed=False similarly if you rely on compound_open for any comparisons.
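The polling pattern recommended in these comments can be sketched generically; this `wait_until` is a hypothetical stand-in for the helper in example_reminders.py, whose actual signature may differ.

```python
import time


def wait_until(predicate, timeout: float = 30.0, interval: float = 0.5):
    """Poll predicate() until it returns a truthy value, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition did not become truthy before timeout")
        time.sleep(interval)
```

A caller would wrap the post-write lookup in the predicate, for example polling until a freshly created alarm id shows up in the re-fetched reminder before asserting on it.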
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyicloud/services/reminders/_writes.py`:
- Around line 359-395: The write path currently sets "Completed" in
record_fields but never updates "CompletionDate", so update the create() and
update() routines to write "CompletionDate" alongside "Completed": when
completed is true set "CompletionDate" to the same millisecond timestamp (now_ms
or provided completed_date), and when completed is false clear it (e.g., null/0
or omit the field per existing API expectations) so round-trip works with
record_to_reminder(); modify the record_fields dictionary in the
create()/update() code paths (the block building record_fields and the analogous
block later) to include a "CompletionDate" entry consistent with other timestamp
fields.
---
Duplicate comments:
In `@example_reminders.py`:
- Around line 1192-1267: The compound query (calls to
reminders_api.list_reminders producing compound_all/compound_open) is taken once
and asserted against immediately; instead poll until the result stabilises using
the existing wait_until helper: repeatedly call
reminders_api.list_reminders(target_list.id, include_completed=True,
results_limit=args.results_limit) until the returned structure contains the
expected keys and expected_created_ids (and contains arrive_alarm.id,
leave_alarm.id, arrive_trigger.id, leave_trigger.id, hashtag_created.id,
attachment_created.id, recurrence_created.id) or a timeout occurs, then run the
tracker.expect assertions against that stabilised compound_all; ensure you also
check include_completed=False similarly if you rely on compound_open for any
comparisons.
In `@README.md`:
- Around line 946-952: The README example incorrectly reuses in-memory cursor
(`loaded_cursor = cursor`) making it a same-run query; instead demonstrate
persisting the cursor returned by reminders.sync_cursor() and reloading it in a
later run (e.g., show saving the value to durable storage and then assigning
loaded_cursor from that storage) before calling
reminders.iter_changes(since=loaded_cursor); split into “Run 1” (call
reminders.sync_cursor() and persist the cursor) and “Run 2” (load the persisted
cursor into loaded_cursor and pass it to reminders.iter_changes) and update the
snippet to clearly show those two separate steps and the storage interaction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9c1ffc91-5807-425f-971e-974eb8bc04bc
📒 Files selected for processing (9)
- README.md
- example_reminders.py
- pyicloud/common/cloudkit/models.py
- pyicloud/services/notes/rendering/attachments.py
- pyicloud/services/notes/rendering/table_builder.py
- pyicloud/services/reminders/_writes.py
- tests/services/test_reminders_cloudkit.py
- tests/test_notes.py
- tests/test_notes_rendering.py
🚧 Files skipped from review as they are similar to previous changes (3)
- pyicloud/services/notes/rendering/attachments.py
- tests/test_notes.py
- pyicloud/common/cloudkit/models.py
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/services/test_reminders_cloudkit.py (1)
1375-1451: Add a regression test for mismatched parent/child deletes.

Please add a negative test ensuring `delete_hashtag()`, `delete_attachment()`, and `delete_recurrence_rule()` reject a child whose `reminder_id` does not match the provided `reminder.id`. This will lock in the linkage-integrity guard.
Verify each finding against the current code and only fix it if needed. In `@tests/services/test_reminders_cloudkit.py` around lines 1375 - 1451, Add negative tests that call delete_hashtag, delete_attachment, and delete_recurrence_rule with a child object whose reminder_id does not match the provided reminder.id and assert the service rejects it (e.g., raises ValueError); for each test create a RemindersService instance (mock svc._raw), construct a Reminder with id "Reminder/GOOD" and a child object (Hashtag/Attachment/RecurrenceRule) whose reminder_id is "Reminder/BAD", call the appropriate delete_* method and assert it raises the expected exception and that svc._raw.modify was not invoked (reset/inspect svc._raw.modify.call_count or assert not called) to ensure linkage-integrity is enforced.
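The linkage-integrity guard such a regression test would lock in can be sketched as a stand-alone check; the helper name and error type here are assumptions mirroring the `ValueError` the review suggests, not the service's actual code.

```python
def ensure_child_belongs_to(reminder_id: str, child_reminder_id: str) -> None:
    """Raise if a child record (hashtag/attachment/recurrence rule) is linked
    to a different parent reminder than the one being modified."""
    if reminder_id != child_reminder_id:
        raise ValueError(
            f"child is linked to {child_reminder_id!r}, not {reminder_id!r}"
        )
```

A delete path would call this before issuing any CloudKit modify operation, so a mismatched child aborts locally instead of mutating the wrong record.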
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyicloud/services/reminders/_writes.py`:
- Around line 324-348: The code can delete a child record linked to the wrong
reminder because it never verifies the child's current parent before issuing the
update/delete operations; update the logic around reminder_id, child_op, and
modify call to first fetch/inspect the child's current parent reference (or its
parent ID field) and assert it matches the intended reminder_id (or that the
reminder's recordName from _reminder_record_name(reminder_id) is present in the
parent's child list) before proceeding; if it does not match, abort or raise an
error instead of performing CKModifyOperation on the child (i.e., validate
ownership and only include child_op in the _get_raw().modify operations when the
ownership check passes). Ensure the check uses the same record fields used
elsewhere (the child's parent reference / record_change_tag) and keep modify
atomicity intact by removing child_op when ownership validation fails.
---
Nitpick comments:
In `@tests/services/test_reminders_cloudkit.py`:
- Around line 1375-1451: Add negative tests that call delete_hashtag,
delete_attachment, and delete_recurrence_rule with a child object whose
reminder_id does not match the provided reminder.id and assert the service
rejects it (e.g., raises ValueError); for each test create a RemindersService
instance (mock svc._raw), construct a Reminder with id "Reminder/GOOD" and a
child object (Hashtag/Attachment/RecurrenceRule) whose reminder_id is
"Reminder/BAD", call the appropriate delete_* method and assert it raises the
expected exception and that svc._raw.modify was not invoked (reset/inspect
svc._raw.modify.call_count or assert not called) to ensure linkage-integrity is
enforced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ae42e207-e4bd-41b5-a8ff-bd0e061abcb1
📒 Files selected for processing (3)
- README.md
- pyicloud/services/reminders/_writes.py
- tests/services/test_reminders_cloudkit.py
Force-pushed from 15fc0fb to 260af7f (compare)
| " %s: %s" | ||
| % (i, device.get("deviceName", "SMS to %s" % device.get("phoneNumber"))) |
Check failure
Code scanning / CodeQL
Clear-text logging of sensitive information
| """SRP password.""" | ||
|
|
||
| def __init__(self, password: str) -> None: | ||
| self._password_hash: bytes = sha256(password.encode("utf-8")).digest() |
Check failure
Code scanning / CodeQL
Use of a broken or weak cryptographic hashing algorithm on sensitive data
```python
if resp := self._handle_drive_download(params):
    return resp

if "icloud-content.com" in url and method == "GET":
```

Check failure: Code scanning / CodeQL: Incomplete URL substring sanitization
```python
iterations = 2000
key_length = 32

password_digest: bytes = sha256(password.encode("utf-8")).digest()
```

Check failure: Code scanning / CodeQL: Use of a broken or weak cryptographic hashing algorithm on sensitive data
```python
key_length = 32

password_digest: bytes = sha256(password.encode("utf-8")).digest()
password_digest_hex: bytes = sha256(password.encode("utf-8")).hexdigest().encode()
```

Check failure: Code scanning / CodeQL: Use of a broken or weak cryptographic hashing algorithm on sensitive data
@MrJarnould - Thanks for your effort on this. Are you able to address the conflicts? I have attempted to rebase but appear to have failed. Thanks again
Force-pushed from a4fdfa7 to 5b05530 (compare)
example_reminders_delta.py
Outdated
```python
print("Trusted devices:")
for index, device in enumerate(devices):
    print(f"  {index}: {_trusted_device_label(device)}")
```

Check failure: Code scanning / CodeQL: Clear-text logging of sensitive information
Breaking change
The Reminders service now uses a CloudKit-backed typed API instead of the previous legacy reminders endpoint.
If you currently rely on the old Reminders interface, you will need to migrate:
- `api.reminders.refresh()` is no longer the read entrypoint. Reads are now on-demand through methods like `lists()`, `reminders()`, `get()`, `list_reminders()`, `sync_cursor()`, and `iter_changes()`.
- `api.reminders.post(...)` is replaced by typed mutation methods such as `create()`, `update()`, `delete()`, `add_location_trigger()`, `create_hashtag()`, `create_url_attachment()`, and `create_recurrence_rule()`.
- `lists` / `collections` dictionary-style access is replaced by typed `RemindersList`, `Reminder`, `Alarm`, `Hashtag`, `Attachment`, and `RecurrenceRule` models.

This change was made to align the Reminders implementation with Apple's current CloudKit-backed backend, support richer reminder metadata and delta sync, and share infrastructure with the new Notes service.
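The delta-sync flow enabled by `sync_cursor()` / `iter_changes()` can be sketched with durable cursor storage; `save_cursor`, `load_cursor`, and the JSON file format below are illustrative helpers, not part of pyicloud.

```python
import json
from pathlib import Path

CURSOR_FILE = Path("reminders_cursor.json")  # hypothetical storage location


def save_cursor(cursor: str, path: Path = CURSOR_FILE) -> None:
    """Run 1: persist the opaque cursor returned by reminders.sync_cursor()."""
    path.write_text(json.dumps({"cursor": cursor}), encoding="utf-8")


def load_cursor(path: Path = CURSOR_FILE):
    """Run 2: reload the previously saved cursor, or None on a first run."""
    if not path.exists():
        return None
    return json.loads(path.read_text(encoding="utf-8")).get("cursor")


# In a later run, with an authenticated `api`:
# cursor = load_cursor()
# if cursor is not None:
#     for event in api.reminders.iter_changes(since=cursor):
#         print(event.type, event.reminder_id)
```

Persisting the cursor between runs is what makes the second invocation a true delta query instead of a full re-read.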
Proposed change
This PR adds a new CloudKit-backed Notes service, completes the CloudKit-backed Reminders service work on this branch, and documents both services.
At a high level, this PR:
- New `NotesService` exposed as `api.notes`
- New `RemindersService` package structure with typed read/write APIs
- Shared CloudKit infrastructure in `pyicloud.common`

Notes service implementation
The new Notes service includes:
- Read entrypoints: `recents()`, `iter_all()`, `folders()`, `in_folder()`, `get()`, `sync_cursor()`, and `iter_changes()`
- Typed `Note`, `NoteSummary`, `NoteFolder`, and `Attachment` models
- Rendering and export via `render_note()` and `export_note()`
- `ExportConfig`, including `archival` vs `lightweight` export modes
- `examples/notes_cli.py` for exercising note selection and export flows

Reminders service implementation
The CloudKit-backed Reminders implementation includes:
- Read entrypoints: `lists()`, `reminders()`, `get()`, `list_reminders()`, `sync_cursor()`, and `iter_changes()`
- Typed mutations: `create()`, `update()`, `delete()`
- Typed `Reminder`, `RemindersList`, `Alarm`, `LocationTrigger`, `Hashtag`, `URLAttachment`, `ImageAttachment`, `RecurrenceRule`, and `ReminderChangeEvent` models

Type of change
Example of code:
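A minimal sketch of the new typed surfaces, drawn from the method names in this description; the signatures and model fields are assumptions for illustration, and `api` is an authenticated `PyiCloudService`.

```python
# Sketch only: method signatures and model fields are assumed from the
# feature lists above, not verified against the implementation.


def open_reminders_by_list(reminders_service):
    """Read path: typed models replace the old dictionary-style access."""
    return {
        lst.title: [
            r.title
            for r in reminders_service.reminders(lst.id)
            if not r.completed
        ]
        for lst in reminders_service.lists()
    }


def export_recent_notes(notes_service, limit=3):
    """Notes path: export recent notes as (id, html) pairs."""
    exported = []
    for summary in notes_service.recents():
        exported.append((summary.id, notes_service.export_note(summary.id)))
        if len(exported) >= limit:
            break
    return exported


# open_reminders_by_list(api.reminders)
# export_recent_notes(api.notes)
```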
Additional information
- Tests run locally: `uv run pytest tests/test_notes.py tests/test_notes_rendering.py tests/test_notes_cli.py tests/services/test_reminders_cloudkit.py`
- The `tests/services/test_reminders.py` suite still targets the pre-CloudKit Reminders API and does not reflect the new service surface introduced in this PR.

Checklist
If user exposed functionality or configuration variables are added/changed:
Addresses #60 and #61