
[script.punchplay] 1.0.0 #2811

Merged
romanvm merged 1 commit into xbmc:omega from PunchPlay:add/script.punchplay
Mar 27, 2026

Conversation

@PunchPlay

Description

New add-on submission. PunchPlay Scrobble automatically tracks movies and TV episodes watched in Kodi and sends them to your PunchPlay.tv account in real time. Features device-code OAuth login, offline queuing with SQLite, heartbeat progress reporting, anime detection via genre tags, and a filename parser fallback.

Source: https://github.com/PunchPlay/script.punchplay
License: GPL-2.0-only
Tested on: Kodi 21 (Omega) macOS

Checklist:

  • My code follows the add-on rules and piracy stance of this project.
  • I have read the CONTRIBUTING document
  • Each add-on submission should be a single commit using the following style: [script.punchplay] 1.0.0

@kodiai

kodiai bot commented Mar 19, 2026

Kodiai Review Summary

What Changed

New add-on submission for PunchPlay Scrobble, which automatically tracks movies and TV episodes watched in Kodi and sends them to the user's PunchPlay.tv account.

Reviewed: core logic, docs

Strengths

  • ✅ Comprehensive offline queue implementation with SQLite fallback ensures scrobbles aren't lost during network failures (cache.py:107-127)
  • ✅ Proper secure file permissions (0o600) for sensitive token and device ID files (api.py:57-59, 73-75)
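As a sketch of the pattern the reviewer is praising (hypothetical helper name `write_secret`; the add-on's actual code is not reproduced here), restrictive permissions can be applied atomically at file creation rather than with a separate `chmod` call:

```python
import os

def write_secret(path: str, data: str) -> None:
    # Open with O_CREAT and an explicit mode of 0o600 so the token file
    # is created readable and writable by the owner only. Note the mode
    # argument applies only when the file is created; a plain open()
    # would instead use the process umask's default (often 0o644).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)
```

Creating the file with the right mode avoids the brief window where a default-permission file exists before a follow-up `os.chmod`.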

Observations

Impact

[CRITICAL] script.punchplay/api.py (450): 4xx errors silently dropped during queue flush
During queue flush, the code drops ALL 4xx errors (api.py:450), but 401 errors should trigger token refresh before deletion. A queued scrobble with a stale token will be permanently lost instead of retried after refresh.
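A minimal sketch of the fix this finding implies, with hypothetical callbacks (`send_event`, `refresh_token`, `delete_event`) standing in for the add-on's actual API:

```python
def flush_queue(queue, send_event, refresh_token, delete_event):
    # Hypothetical flush loop: a 401 triggers a token refresh and one
    # retry instead of being dropped with the other 4xx errors.
    for event in list(queue):
        status = send_event(event)
        if status == 401:
            refresh_token()
            status = send_event(event)  # one retry with the fresh token
            if status == 401:
                break  # still unauthorized: keep remaining events queued
        if 200 <= status < 300 or (400 <= status < 500 and status != 401):
            delete_event(queue, event)  # delivered, or permanently rejected
```

Only non-401 client errors are treated as permanent; server errors (5xx) leave the event queued for the next flush.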

[MAJOR] script.punchplay/player.py (324-326): Potential crash from None season/episode values
When formatting the scrobble notification message, season and episode values can be None (per identifier.py:207-208), but the code uses f-string formatting with :02d which will crash on None values.
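A guard along these lines avoids the crash (hypothetical function name; the add-on's real formatting code is not shown here), since `f"{None:02d}"` raises a `TypeError`:

```python
def format_episode_label(title, season, episode):
    # f"{season:02d}" raises TypeError when season is None, so fall
    # back to the bare title when either identifier is missing.
    if season is None or episode is None:
        return title
    return f"{title} S{season:02d}E{episode:02d}"
```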

[MAJOR] script.punchplay/service.py (79, 84): Silent timestamp update on flush_queue exception
When flush_queue() raises an exception (service.py:81-83), the code still updates self._last_flush (line 84), preventing retry for 60 seconds.

Verdict

🔴 Address before merging -- 3 blocking issue(s) found

Review Details
  • Files reviewed: 14
  • Lines changed: +1499 -0
  • Profile: minimal (auto, lines changed: 1499)
  • Author: newcomer (adapted tone)
  • Findings: 1 critical, 2 major, 0 medium, 0 minor (includes 3 from summary observations)
  • Review completed: 2026-03-19T00:12:22.133Z
  • Keyword parsing:
    • focus hints: [SCRIPT.PUNCHPLAY]

@PunchPlay PunchPlay force-pushed the add/script.punchplay branch from 137172a to d64fd6f on March 19, 2026 at 00:20
@PunchPlay
Author

Kodiai Review Summary

@kodiai can you re-review?

@kodiai

kodiai bot commented Mar 19, 2026

kodiai response

Fresh review complete. The code is well-structured overall with solid error handling patterns, but has three issues that should be addressed:

Critical Issues

(1) api.py:164 — _should_drop_client_error() incorrectly treats 401 as retriable

The function returns True for all 4xx errors except 401, meaning it says "drop when code != 401". However, the logic in flush_queue() (api.py:231) uses this to drop errors, so when the function returns True, the event is deleted. This means 401 errors are currently kept in the queue during replay, which is correct.

But the issue is in the initial post() method (api.py:190): when a fresh scrobble gets a 401, the condition not self._should_drop_client_error(exc.code) evaluates to not False = True for 401, so it queues the event. This is wrong because _request() already handles 401 by refreshing the token (api.py:116-121). If a 401 persists after the built-in refresh, it means the user is logged out and should re-authenticate — these events should remain queued.

The real problem: the function name and logic are confusing. It should clearly separate "permanent client errors that should be dropped" (400, 403, 404, 422) from "auth errors that need user action" (401).

Recommendation: Rename to _is_permanent_client_error() and return 400 <= status_code < 500 and status_code != 401. Update call sites to match: "if permanent, drop; else queue."

(2) service.py:79-85 — Silent queue flush failures

When flush_queue() raises an exception, it's logged at DEBUG level and _last_flush is not updated. This means:

  • If the network is down, flush will be retried every second (the waitForAbort(1) interval)
  • If there's a persistent error (e.g., malformed queued payload), flush will fail silently forever
  • The user has no visibility that scrobbles are stuck

The try/except catches all exceptions, including ConnectionError that flush_queue() already handles internally. This double-wrapping makes failures invisible.

Recommendation: Remove the try/except and let exceptions propagate, or at minimum log at WARNING level and update _last_flush regardless to avoid retry storms.
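One possible shape for the second option (hypothetical class and method names; the real service loop drives this from Kodi's waitForAbort tick):

```python
import logging
import time

FLUSH_INTERVAL = 60  # seconds between queue flush attempts

class ScrobbleService:
    # Sketch of the recommended behavior: log flush failures at WARNING
    # and always advance _last_flush so a persistent error cannot cause
    # a retry storm on every one-second tick.
    def __init__(self, flush_queue):
        self._flush_queue = flush_queue
        self._last_flush = 0.0

    def maybe_flush(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_flush < FLUSH_INTERVAL:
            return
        try:
            self._flush_queue()
        except Exception:
            logging.warning("Queue flush failed; will retry", exc_info=True)
        finally:
            self._last_flush = now  # update regardless of outcome
```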

(3) player.py:174-176 — Heartbeat thread doesn't exit cleanly on player errors

When getTime() or getTotalTime() throws (e.g., player was closed externally), the exception is caught and the code checks isPlayingVideo() to decide whether to return. However, if isPlayingVideo() also throws or returns True (stale state), the heartbeat loop continues spinning and logging warnings every 30 seconds until Kodi shuts down.

Recommendation: After catching an exception, always call _hb_stop.set() to signal termination, then return. This ensures the thread exits cleanly rather than continuing to poll.
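A sketch of that shutdown behavior, with hypothetical callables standing in for the Kodi player methods:

```python
import threading

def heartbeat_loop(get_time, get_total_time, send_heartbeat,
                   stop: threading.Event, interval: float = 30.0) -> None:
    # Hypothetical heartbeat loop: on any player error, signal the stop
    # event and exit instead of spinning and logging every interval.
    while not stop.wait(interval):
        try:
            send_heartbeat(get_time(), get_total_time())
        except Exception:
            stop.set()   # terminate cleanly; the player is gone
            return
```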

Minor Observations

(4) cache.py:39-40 — WAL mode pragma executed on every connection

conn.execute("PRAGMA journal_mode=WAL") is called in _connect(), which is invoked for every database operation. WAL mode is persistent across connections, so this is redundant after the first call. Not a correctness issue, but unnecessary overhead.

Optional: Move WAL pragma to _init_db() and execute it once.
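The one-time initialisation could look like this (hypothetical table schema; the add-on's real layout is not shown):

```python
import sqlite3

def init_db(path: str) -> None:
    # WAL journal mode is persisted in the database file itself, so
    # setting it once at initialisation is enough; repeating the PRAGMA
    # on every connection is redundant.
    conn = sqlite3.connect(path)
    try:
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute(
            "CREATE TABLE IF NOT EXISTS queue ("
            "id INTEGER PRIMARY KEY, payload TEXT NOT NULL)"
        )
        conn.commit()
    finally:
        conn.close()
```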

(5) identifier.py:92-97 — Anime detection uses broad "animation" genre

The is_anime() function treats any item with "animation" genre as anime (identifier.py:95). This means Western cartoons, Pixar movies, etc. are misclassified as anime and subject to the scrobble_anime setting.

Optional: If the intent is to distinguish Japanese animation specifically, consider checking for both "anime" and "animation" separately, or rely solely on "anime" genre tag. If the current behavior is intentional, document it clearly.
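If the stricter behavior is wanted, a check along these lines (a sketch mirroring the is_anime() name, not the add-on's actual code) would exclude plain "animation":

```python
def is_anime(genres):
    # Match only the dedicated "anime" genre tag, so Western animation
    # (tagged "animation" alone) is not swept into the anime setting.
    normalized = {g.strip().lower() for g in genres}
    return "anime" in normalized
```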


All issues include exact file:line references. The critical issues should be fixed before merge; the minor observations are optional improvements.

@PunchPlay PunchPlay force-pushed the add/script.punchplay branch 2 times, most recently from 264fc6c to 8301a2f on March 19, 2026 at 13:36
@PunchPlay
Author

kodiai response

@kodiai please review again. All issues should be resolved now and the README updated.


@kodiai bot left a comment


kodiai response

Decision: APPROVE

Issues: none

@romanvm

romanvm commented Mar 21, 2026

Please fix the icon

@PunchPlay PunchPlay force-pushed the add/script.punchplay branch from abe1344 to fad36a7 on March 22, 2026 at 04:33
@PunchPlay
Author

PunchPlay commented Mar 22, 2026

Please fix the icon

Hey @romanvm, should be all good now.

@romanvm romanvm added the Approved Approved and is ready to merge label Mar 27, 2026
@romanvm romanvm merged commit 86109a4 into xbmc:omega Mar 27, 2026
1 check passed
