Add shapiq -> shap.Explanation bridge helper #292

Merged
adrian-prior merged 4 commits into main from adrian/shap-bridge-helper on May 12, 2026

Conversation

@adrian-prior (Collaborator)

Summary

Adds a small bridge helper so users who want to compute Shapley values with shapiq (faster and TabPFN-friendly) but plot with the shap library (mature plotting ecosystem) don't have to write the 15-line "loop / stack / average baseline / construct Explanation" boilerplate themselves.

What changed

New module src/tabpfn_extensions/interpretability/shap.py:

def shapiq_to_shap_explanation(
    explainer, X, *, budget, feature_names=None,
) -> shap.Explanation:
    ...

Mirrors the pattern in examples/interpretability/shap_example.py: one .explain(...) per row, stack the first-order arrays into (n, d), average baseline values, return a shap.Explanation. The import shap is inside the function body so the rest of the interpretability surface stays usable without shap installed.
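
For orientation, here is a minimal sketch of what that body looks like. It is not a copy of the merged code: the shap.Explanation construction is the standard shap API, but the shapiq side (explainer.explain(x, budget=...) returning an object with get_n_order_values(1) and baseline_value) is an assumption that may need adjusting to the installed shapiq version.

```python
import numpy as np


def shapiq_to_shap_explanation(explainer, X, *, budget, feature_names=None):
    """Run a shapiq explainer row by row and wrap the results for shap.plots.* (sketch)."""
    import shap  # lazy import: the rest of the interpretability surface works without shap

    X = np.asarray(X)
    shapley_rows, baselines = [], []
    for x in X:
        iv = explainer.explain(x, budget=budget)       # one shapiq call per row
        shapley_rows.append(iv.get_n_order_values(1))  # first-order values, shape (d,)
        baselines.append(iv.baseline_value)

    return shap.Explanation(
        values=np.stack(shapley_rows),                           # (n, d)
        base_values=np.full(len(X), float(np.mean(baselines))),  # averaged baseline, (n,)
        data=X,
        feature_names=feature_names,
    )
```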

That filename was freed up when #283 removed the legacy SHAP-based explainer; reusing it for the bridge helper maps naturally (shap.py = shap library integration, shapiq.py = shapiq library integration).

Re-export: interpretability/__init__.py now exports shapiq_to_shap_explanation so the import is short:

from tabpfn_extensions.interpretability import shapiq_to_shap_explanation

Example shrunk: examples/interpretability/shap_example.py now replaces the previous 15-line boilerplate with a single shapiq_to_shap_explanation(...) call. The five shap.plots.* demonstrations are unchanged.
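
For reference, the trimmed example boils down to something like this (illustrative: explainer, X_test, and feature_names come from the setup earlier in the script, and only two of the five plot calls are shown):

```python
import shap

from tabpfn_extensions.interpretability import shapiq_to_shap_explanation

explanation = shapiq_to_shap_explanation(
    explainer, X_test[:30], budget=256, feature_names=feature_names
)

# Any of the shap.plots.* family accepts the Explanation from here.
shap.plots.beeswarm(explanation)
shap.plots.waterfall(explanation[0])
```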

README: the paragraph in src/tabpfn_extensions/interpretability/README.md that mentioned "or convert the values to shap.Explanation and use shap.plots.*" now names the helper explicitly and shows the one-liner, with the "use shapiq's native plots" path still presented as the alternative.

What this is not

  • Not adding shap to the interpretability extra. Same decision that #283 ([RES-1467] Drop SHAP, refresh shapiq examples around v3 KV cache) made: shapiq is the runtime dep, shap is opt-in for plotting. Documented in the helper's docstring.
  • Not supporting higher-order interactions. shap.Explanation doesn't represent them; for those, users should keep using shapiq's native plots. Documented in the docstring.

Verification

Smoke-tested end-to-end in a clean venv (Python 3.12, shap + shapiq + tabpfn-extensions installed from local source) with a stub explainer that returns shapiq.InteractionValues. Checks:

| Property | Result |
| --- | --- |
| `shapiq_to_shap_explanation(...)` import path | ✅ |
| Returned object is `shap.Explanation` | ✅ |
| `values.shape == (n, d)` | ✅ |
| `base_values.shape == (n,)` | ✅ |
| `feature_names` passed through | ✅ |
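
The table above corresponds to assertions of roughly this shape; stub_explainer, X, and names stand in for the smoke-test fixtures (the stub itself, which returns a fixed shapiq.InteractionValues per row, is not shown):

```python
import numpy as np
import shap

from tabpfn_extensions.interpretability import shapiq_to_shap_explanation


def check_bridge_output(stub_explainer, X, names):
    """Assertions mirroring the checks listed above; fixtures are supplied by the test."""
    expl = shapiq_to_shap_explanation(stub_explainer, X, budget=256, feature_names=names)

    assert isinstance(expl, shap.Explanation)               # real shap object
    assert expl.values.shape == (len(X), X.shape[1])        # (n, d) first-order values
    assert np.asarray(expl.base_values).shape == (len(X),)  # one baseline per row
    assert list(expl.feature_names) == list(names)          # names passed through untouched
```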

🤖 Generated with Claude Code

Introduces `shapiq_to_shap_explanation(explainer, X, *, budget,
feature_names=None)` that runs a shapiq explainer over a batch of rows and
wraps the first-order Shapley values in a `shap.Explanation` ready for the
`shap.plots.*` family.

Lives in `src/tabpfn_extensions/interpretability/shap.py` — that filename
was freed up when #283 removed the legacy SHAP-based explainer, and the
new role ("shap library bridge for plotting") maps naturally to it.
Re-exported from `tabpfn_extensions.interpretability` so users can
import it directly.

`shap_example.py` collapses the previous 15-line "loop over rows / stack /
average baseline / construct Explanation" boilerplate to a single helper
call.

`interpretability/README.md` updated: the paragraph that mentioned
converting to `shap.Explanation` now names the helper explicitly and
shows a one-liner, while keeping the "use shapiq's native plots" option
visible as the alternative.

`shap` is intentionally NOT added to the `interpretability` extra (same
choice #283 made). The import lives inside the function body so the rest
of the interpretability surface stays importable without shap installed.

Verified end-to-end with a stub shapiq explainer in a clean venv that has
shap + shapiq + tabpfn-extensions installed: values/base_values/data
shapes correct, returned object is a real `shap.Explanation`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@gemini-code-assist (Bot) left a comment


Code Review

This pull request introduces a bridge helper function, shapiq_to_shap_explanation, to facilitate the use of the SHAP library's plotting ecosystem with Shapley values computed via shapiq. The changes include the implementation of the helper in a new shap.py module, updates to the interpretability documentation, and a refactor of the SHAP example script to utilize this new utility. Feedback was provided to improve the robustness of the helper by handling empty input cases and using per-row baseline values instead of a global average to better support diverse explainer configurations.

Comment thread on src/tabpfn_extensions/interpretability/shap.py
adrian-prior and others added 2 commits May 12, 2026 16:00
shap.Explanation accepts a 1-d (n,) array for base_values natively. Passing
per-row baselines straight through is more correct than averaging:

- For the imputation path (our default, get_tabpfn_imputation_explainer)
  baseline_value comes from the fixed background dataset and is identical
  across rows, so this matches the old behavior exactly.
- For the Rundel remove-and-recontextualize path baselines genuinely vary
  per row; averaging would silently lose that signal.

Also avoids the np.mean-on-empty RuntimeWarning Gemini flagged.

Skipping Gemini's other suggestion (explicit raise on n==0) — np.stack's
native ValueError is fine and adding the check is one more thing to keep.

Verified end-to-end locally by running the actual shap_example.py against
the v3 California-housing setup (n_explain=30, budget=2^8=256). All five
plots (summary, scatter, bar, beeswarm, waterfall) rendered correctly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
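
In sketch form, the construction that commit describes (per_row_results, get_n_order_values(1), and baseline_value are stand-ins for the shapiq objects involved, not quotes from the diff):

```python
import numpy as np
import shap


def to_explanation(per_row_results, X, feature_names=None):
    """Post-change construction: per-row baselines go through untouched (sketch)."""
    return shap.Explanation(
        values=np.stack([iv.get_n_order_values(1) for iv in per_row_results]),  # (n, d)
        # Previously the baselines were averaged into one number; now the (n,) array is
        # passed straight through -- identical per row on the imputation path, genuinely
        # varying on the Rundel remove-and-recontextualize path.
        base_values=np.asarray([iv.baseline_value for iv in per_row_results]),
        data=np.asarray(X),
        feature_names=feature_names,
    )
```
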
The previous commit accidentally staged the test-plot artifacts I'd
generated under .local/shap_example_plots/ to verify the helper end-to-end.
Those shouldn't be on the PR. Removing them from the index and re-adding
.local/ to .gitignore (we'd un-ignored it during the v3 cleanup work and
no longer have a use for that path being tracked).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@adrian-prior marked this pull request as ready for review on May 12, 2026 at 14:02
@adrian-prior requested a review from a team as a code owner on May 12, 2026 at 14:02
@adrian-prior requested review from anuragg1209 and removed the request for a team on May 12, 2026 at 14:02
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Credits must be used to enable repository wide code reviews.

@adrian-prior requested a review from LeoGrin on May 12, 2026 at 14:02
The previous commit on this branch re-added .local/ to .gitignore. That's
unrelated to the shap-bridge-helper change and just here because I'd
accidentally let pre-commit stage test-plot artifacts under .local/ in a
prior commit on this branch. The untrack-from-index part of that commit
is correct and stays — it cleans up the accidental staging. The
.gitignore line itself is reverted; .local/ stays as it was on main.

The local plot artifacts at .local/shap_example_plots/ have been deleted
from my working tree (they were never on this PR after 1fab4d0).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@adrian-prior merged commit 2bf9bfd into main on May 12, 2026
8 checks passed