TabPFN is a foundation model for tabular data that outperforms traditional methods while being dramatically faster. This client library provides easy access to the TabPFN API, enabling state-of-the-art tabular machine learning in just a few lines of code.
Tip
Dive right in with our interactive Colab notebook! It's the best way to get a hands-on feel for TabPFN, walking you through installation, classification, and regression examples.
This API is now in a stable release. It has been extensively tested and is used across multiple use cases. While we continue to make improvements, the core service is reliable for day-to-day use. Please reach out to us if you encounter any stability issues.
This is a cloud-based service: your data will be sent to our servers for processing.
Please only upload data you have permission to share, and avoid sensitive, confidential, or personally identifiable information. Consider anonymizing or pseudonymizing your data in line with your organization’s policies.
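As an illustration of pseudonymization, one common approach is to replace direct identifiers with salted, irreversible hashes before upload. This is only a minimal sketch, not part of the client library; the column name `patient_id` and the salt handling are illustrative assumptions, and your organization's policies should take precedence:

```python
import hashlib

SALT = "keep-this-secret"  # illustrative; store separately from the data itself

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, irreversible hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

# Illustrative rows; "patient_id" stands in for any direct identifier column.
rows = [
    {"patient_id": "A-001", "age": 54},
    {"patient_id": "A-002", "age": 61},
]
for row in rows:
    row["patient_id"] = pseudonymize(row["patient_id"])
```

Keep the salt-to-identifier mapping out of the uploaded data so the originals cannot be recovered from what leaves your machine.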
Choose the right TabPFN implementation for your needs:
- TabPFN Client (this repo): Easy-to-use API client for cloud-based inference
- TabPFN Extensions: Community extensions and integrations
- TabPFN: Core implementation for local deployment and research
- TabPFN UX: No-code TabPFN usage
```bash
pip install --upgrade tabpfn-client
```

```python
from tabpfn_client import init, TabPFNClassifier, TabPFNRegressor
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Load an example dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

# Use it like any sklearn model
model = TabPFNClassifier()
model.fit(X_train, y_train)

# Get predictions
predictions = model.predict(X_test)

# Get probability estimates
probabilities = model.predict_proba(X_test)
```

Thinking mode trades extra fit-time compute for higher predictive quality. The server explores additional configurations during `fit()` and returns a tuned model; `predict()` then runs as usual.
```python
from tabpfn_client import TabPFNClassifier

# Simplest form: enable with defaults (effort="medium").
model = TabPFNClassifier(thinking_mode=True)
model.fit(X_train, y_train)
model.predict(X_test)
```

Knobs:

- `thinking_mode: bool = False` — enable thinking with default effort. Equivalent to `thinking_effort="medium"`.
- `thinking_effort: {"medium", "high"} | None` — effort level. Setting this also enables thinking, so `thinking_mode=True` is optional when you've set the level explicitly.
- `thinking_timeout_s: float | None` — budget for the fit, in seconds. Only consulted when thinking is enabled. Capped at 2400 (40 minutes).
- `thinking_metric: str | None` — optimization metric for the fit. Only consulted when thinking is enabled. See the constructor docstring of `TabPFNClassifier`/`TabPFNRegressor` for the full list of supported metrics per task (classification, multiclass, regression) and their aliases.
```python
model = TabPFNClassifier(
    thinking_effort="high",
    thinking_timeout_s=600,
    thinking_metric="roc_auc",
)
```

Notes:

- Thinking mode is only supported on v3 models. Leave `model_path` at its default (`"auto"`, which lets the server pick the latest default — currently a v3 model) or set it explicitly to a v3 model. Combining thinking with a v2 or v2.5 `model_path` raises `ValueError` client-side.
- `thinking_timeout_s` and `thinking_metric` are only consulted when thinking is enabled; passing them without `thinking_mode=True` or `thinking_effort=...` raises `ValueError`.
- Thinking-mode fits take longer than regular fits (often several minutes).
- Thinking-mode fits draw from a separate, smaller budget than regular fits — they do not count against your regular prediction allowance, and you cannot use your regular allowance for them. The number of thinking-mode fits you can run per day is limited. If you need more capacity, request an increase via ux.priorlabs.ai.
To retrieve your access token, use:

```python
import tabpfn_client

token = tabpfn_client.get_access_token()
```

To log in on another machine using your access token, skipping the interactive flow, use:

```python
tabpfn_client.set_access_token(token)
```

We're building the future of tabular machine learning and would love your involvement! Here's how you can participate and get help:
- Try TabPFN: Use it in your projects and share your experience
- Connect & Learn:
- Join our Discord Community for discussions and support
- Read our Documentation for detailed guides
- Check out GitHub Issues for known issues and feature requests
- Contribute:
- Report bugs or request features through issues
- Submit pull requests (see development guide below)
- Share your success stories and use cases
- Stay Updated: Star the repo and join Discord for the latest updates
Each API request consumes usage credits; the cost grows with the number of rows and columns in your dataset. You can check your current usage at ux.priorlabs.ai/api/usage.
Track your API usage through response headers:
- `X-RateLimit-Limit`: Your total allowed usage
- `X-RateLimit-Remaining`: Remaining usage
- `X-RateLimit-Reset`: Reset timestamp (UTC)
Usage limits reset daily at 00:00:00 UTC.
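If you inspect responses directly, the headers can be turned into plain Python values with a small helper. This is a sketch under the assumption that the reset header carries a Unix epoch timestamp; the header values below are made up, and the parsing would need adjusting if your responses use a different timestamp format:

```python
from datetime import datetime, timezone

def parse_rate_limit(headers: dict) -> dict:
    """Parse the X-RateLimit-* headers into ints and a UTC datetime."""
    return {
        "limit": int(headers["X-RateLimit-Limit"]),
        "remaining": int(headers["X-RateLimit-Remaining"]),
        "reset": datetime.fromtimestamp(int(headers["X-RateLimit-Reset"]), tz=timezone.utc),
    }

# Example headers as they might appear on a response (values are made up).
headers = {
    "X-RateLimit-Limit": "10000",
    "X-RateLimit-Remaining": "9250",
    "X-RateLimit-Reset": "1735689600",  # 2025-01-01 00:00:00 UTC
}
info = parse_rate_limit(headers)
```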
Per-model size limits (rows, columns, cells, classes) are enforced by the server and are returned from /tabpfn/get_model_limits. The client validates against the most permissive limit at fit time and against the selected model's limit at predict time, raising ValueError before the request is sent.
In particular, regression with output_type="full" has a stricter cap on the number of test rows than regular regression predictions; split the test set across calls if you hit it.
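If you do hit the cap, a generic chunking helper can split the test rows across calls. This is a sketch: `predict_fn` stands in for whichever prediction call hits the limit, and the demo uses a dummy function rather than a real model. For dict-valued outputs such as `output_type="full"`, you would concatenate per key instead of calling `np.concatenate` on the whole result:

```python
import numpy as np

def predict_in_chunks(predict_fn, X, chunk_size: int):
    """Call predict_fn on row-chunks of X and concatenate the results."""
    parts = [predict_fn(X[i : i + chunk_size]) for i in range(0, len(X), chunk_size)]
    return np.concatenate(parts)

# Demo with a stand-in predict function (doubles each row's first column).
X = np.arange(10, dtype=float).reshape(5, 2)
preds = predict_in_chunks(lambda chunk: chunk[:, 0] * 2, X, chunk_size=2)
# preds has one value per row of X: [0., 4., 8., 12., 16.]
```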
These limits will be increased in future releases.
You can use our UserDataClient to access and delete personal information.
```python
from tabpfn_client import UserDataClient

print(UserDataClient.get_data_summary())
```

You can read our paper explaining TabPFNv2 here, and the model report of TabPFN-2.5 here.
BibTeX
```bibtex
@misc{grinsztajn2025tabpfn,
  title={TabPFN-2.5: Advancing the State of the Art in Tabular Foundation Models},
  author={Léo Grinsztajn and Klemens Flöge and Oscar Key and Felix Birkel and Philipp Jund and Brendan Roof and
          Benjamin Jäger and Dominik Safaric and Simone Alessi and Adrian Hayler and Mihir Manium and Rosen Yu and
          Felix Jablonski and Shi Bin Hoo and Anurag Garg and Jake Robertson and Magnus Bühler and Vladyslav Moroshan and
          Lennart Purucker and Clara Cornu and Lilly Charlotte Wehrhahn and Alessandro Bonetto and
          Bernhard Schölkopf and Sauraj Gambhir and Noah Hollmann and Frank Hutter},
  year={2025},
  eprint={2511.08667},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2511.08667},
}

@article{hollmann2025tabpfn,
  title={Accurate predictions on small data with a tabular foundation model},
  author={Hollmann, Noah and M{\"u}ller, Samuel and Purucker, Lennart and
          Krishnakumar, Arjun and K{\"o}rfer, Max and Hoo, Shi Bin and
          Schirrmeister, Robin Tibor and Hutter, Frank},
  journal={Nature},
  year={2025},
  month={01},
  day={09},
  doi={10.1038/s41586-024-08328-6},
  publisher={Springer Nature},
  url={https://www.nature.com/articles/s41586-024-08328-6},
}

@inproceedings{hollmann2023tabpfn,
  title={TabPFN: A transformer that solves small tabular classification problems in a second},
  author={Hollmann, Noah and M{\"u}ller, Samuel and Eggensperger, Katharina and Hutter, Frank},
  booktitle={International Conference on Learning Representations 2023},
  year={2023}
}
```

This project is licensed under the Apache License 2.0 — see the LICENSE file for details.
Setup, build, and release instructions
To encourage better coding practices, ruff has been added to the pre-commit hooks. This will ensure that the code is formatted properly before being committed. To enable pre-commit (if you haven't), run the following command:

```bash
pre-commit install
```

Additionally, it is recommended that developers install the ruff extension in their preferred editor. For installation instructions, refer to the Ruff Integrations Documentation.
```bash
git clone https://github.com/PriorLabs/tabpfn-client
cd tabpfn-client
git submodule update --init --recursive
pip install -e .
cd ..
```

NOTE: For development, you will need to install some additional dev dependencies. Use the command below to get set up for development and running tests:
```bash
pip install -e ".[dev]"
```

- First ensure you've bumped the version in `pyproject.toml`. Use an rc suffix until you're sure it works. Something like `x.y.zrc1`.

- Build, upload to the test PyPI, install and run a quick test.

  Note: Assumes a working uv install + venv.
  ```bash
  rm -rf ~/tabpfn-client-test.tmp dist
  uv pip install --upgrade build && python -m build
  uv pip install --upgrade twine && python -m twine upload --repository testpypi dist/*

  # Use a separate directory for testing so we don't accidentally run the local code
  mkdir ~/tabpfn-client-test.tmp && cp tests/quick_test.py ~/tabpfn-client-test.tmp && cp tests/quick_test_reasoning.py ~/tabpfn-client-test.tmp && cd ~/tabpfn-client-test.tmp
  uv venv && source .venv/bin/activate

  # We use --pre for the rc version and --no-deps because TestPyPI dependencies are unreliable.
  pip3 download --pre --index-url https://test.pypi.org/simple/ --no-deps tabpfn-client
  uv pip install *.whl
  python quick_test.py
  ```
- Return to this repo. Correct the version. Ideally this should be what is in main. It shouldn't have an rc suffix unless we're doing broader pre-release testing.

- Build, upload to the real PyPI, install and run a quick test.
  ```bash
  rm -rf ~/tabpfn-client-test.tmp dist
  uv pip install --upgrade build && python -m build
  uv pip install --upgrade twine && python -m twine upload --repository pypi dist/*

  # Use a separate directory for testing so we don't accidentally run the local code
  mkdir ~/tabpfn-client-test.tmp && cp tests/quick_test.py ~/tabpfn-client-test.tmp && cp tests/quick_test_reasoning.py ~/tabpfn-client-test.tmp && cd ~/tabpfn-client-test.tmp
  uv venv && source .venv/bin/activate

  # We use --pre in case you intend to push an rc version.
  uv pip install -U --pre tabpfn-client
  python quick_test.py
  ```

Built with ❤️ by the TabPFN community