5 changes: 4 additions & 1 deletion .gitlab-ci.yml
```diff
@@ -1,6 +1,9 @@
 include:
   project: rationai/digital-pathology/templates/ci-templates
-  file: Python-Lint.gitlab-ci.yml
+  file:
+    - Python-Lint.gitlab-ci.yml
+    - MkDocs.gitlab-ci.yml
 
 stages:
   - lint
+  - deploy
```
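For reference, this is the merged `.gitlab-ci.yml` after the change, with YAML indentation restored (the diff view strips leading whitespace, so the exact nesting is inferred):

```yaml
include:
  project: rationai/digital-pathology/templates/ci-templates
  file:
    - Python-Lint.gitlab-ci.yml
    - MkDocs.gitlab-ci.yml

stages:
  - lint
  - deploy
```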
135 changes: 76 additions & 59 deletions README.md
@@ -1,93 +1,110 @@
# Model Service

Model deployment infrastructure for RationAI using Ray Serve on Kubernetes.

This repository contains:

- A KubeRay `RayService` manifest (`ray-service.yaml`) for deploying Ray Serve on Kubernetes.
- Model implementations under `models/` (reference: `models/binary_classifier.py`).
- Documentation under `docs/` (MkDocs).

## Documentation

- MkDocs content: `docs/`
- Key pages:
- `docs/get-started/quick-start.md`
- `docs/guides/deployment-guide.md`
- `docs/guides/adding-models.md`
- `docs/guides/configuration-reference.md`
- `docs/guides/troubleshooting.md`
- `docs/architecture/overview.md`
- `docs/architecture/request-lifecycle.md`
- `docs/architecture/queues-and-backpressure.md`
- `docs/architecture/batching.md`

## Quick Start (Kubernetes)

Full walkthrough: `docs/get-started/quick-start.md`.

### Prerequisites

- Kubernetes cluster with KubeRay operator installed
- `kubectl` configured for the cluster

### Deploy

```bash
kubectl apply -f ray-service.yaml -n [namespace]

kubectl get rayservice rayservice-models -n [namespace]
```

Replace `[namespace]` with your target Kubernetes namespace (for example `rationai-notebooks-ns`); see `docs/get-started/quick-start.md` for a full example.

### Access locally

```bash
kubectl port-forward -n [namespace] svc/rayservice-models-serve-svc 8000:8000
```

### Test the reference model (`BinaryClassifier`)

The reference deployment in `ray-service.yaml` exposes an app at the route prefix:

- `/prostate-classifier-1`

`models/binary_classifier.py` expects a **request body that is LZ4-compressed raw bytes** of a single RGB tile:

- dtype: `uint8`
- shape: `(tile_size, tile_size, 3)`
- byte order: row-major (NumPy default)

Example (Python):

```bash
pip install numpy lz4 requests
```

```python
import lz4.frame
import numpy as np
import requests

tile_size = 512 # must match RayService user_config.tile_size
tile = np.zeros((tile_size, tile_size, 3), dtype=np.uint8)

payload = lz4.frame.compress(tile.tobytes())

resp = requests.post(
"http://localhost:8000/prostate-classifier-1/",
data=payload,
headers={"Content-Type": "application/octet-stream"},
timeout=60,
)
resp.raise_for_status()
print(resp.json() if resp.headers.get("content-type", "").startswith("application/json") else resp.text)
```

## Repository Structure

```
model-service/
├── models/ # Model implementations
│ └── binary_classifier.py
├── providers/ # Model loading providers
│ └── model_provider.py
├── docs/ # Documentation
├── ray-service.yaml # Kubernetes RayService configuration
├── pyproject.toml # Python dependencies
└── README.md
```

## Support


- **Issues:** Report bugs or request features via [GitLab Issues](https://gitlab.ics.muni.cz/rationai/infrastructure/model-service/-/issues)
- **Contact:** RationAI team at Masaryk University

## License

This project is part of the RationAI infrastructure and is available for use by authorized members of the RationAI group.

## Authors

Developed and maintained by the RationAI team at Masaryk University, Faculty of Informatics.
82 changes: 82 additions & 0 deletions docs/architecture/batching.md
@@ -0,0 +1,82 @@
# Batching (How It Works Under the Hood)

Batching in Ray Serve is a **replica-local request coalescing** mechanism.

It improves throughput when your model can process multiple inputs more efficiently together (common for GPU inference).

## Where batching happens

Batching happens **inside each replica process**.

Requests only become eligible for batching after they:

1. pass through the proxy and any handle-level queueing/backpressure, and
2. are routed to a specific replica.

See also: **[Request lifecycle](request-lifecycle.md)**.

## The API surface (what you configure)

In user code, batching is enabled by decorating an **async** method with `@serve.batch`:

- `max_batch_size`: upper bound for how many requests are grouped into one batch execution
- `batch_wait_timeout_s`: maximum time to wait (since the first queued item) before flushing a smaller batch

Serve expects the batched handler to return **one result per input** (same batch length, same order).
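This contract can be illustrated without Ray at all. Below is a minimal sketch of a batched handler; the function name, the payload decoding, and the "model" are all invented stand-ins, not code from this repo:

```python
import asyncio

async def handle_batch(payloads: list) -> list:
    # Decode + stack the whole batch, run ONE forward pass (stubbed here),
    # then return one result per input, in the same order they arrived.
    batch = [len(p) for p in payloads]    # stand-in for decoding each payload
    outputs = [x * 2 for x in batch]      # stand-in for a vectorized model call
    return [{"prediction": y} for y in outputs]

results = asyncio.run(handle_batch([b"ab", b"abcd"]))
print(results)  # [{'prediction': 4}, {'prediction': 8}]
```

The important property is the last line of the handler: the returned list has the same length and ordering as the input list, which is what lets Serve route each element back to its originating request.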

## What Serve actually does internally

Conceptually, each replica maintains an internal structure like:

- an in-memory buffer of pending calls
- a background “flush” loop that decides when to execute a batch
- per-request futures/promises that get completed when the batch finishes
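The three pieces above can be sketched in plain asyncio. This is a toy model of the mechanism, with invented names; Ray Serve's real implementation differs in detail:

```python
import asyncio

class BatchQueue:
    """Toy replica-local batch queue: buffer + flush triggers + per-request futures."""

    def __init__(self, handler, max_batch_size=4, batch_wait_timeout_s=0.05):
        self.handler = handler
        self.max_batch_size = max_batch_size
        self.batch_wait_timeout_s = batch_wait_timeout_s
        self.buffer = []   # pending (argument, future) pairs
        self.timer = None  # flush task armed when the first item arrives

    async def submit(self, arg):
        fut = asyncio.get_running_loop().create_future()
        self.buffer.append((arg, fut))
        if len(self.buffer) >= self.max_batch_size:   # size trigger
            if self.timer is not None:
                self.timer.cancel()
                self.timer = None
            await self._flush()
        elif self.timer is None:                      # arm the time trigger
            self.timer = asyncio.create_task(self._flush_after_timeout())
        return await fut

    async def _flush_after_timeout(self):
        await asyncio.sleep(self.batch_wait_timeout_s)
        self.timer = None
        await self._flush()

    async def _flush(self):
        batch, self.buffer = self.buffer, []
        if not batch:
            return
        args = [a for a, _ in batch]
        outputs = await self.handler(args)            # ONE call for the whole batch
        for (_, fut), out in zip(batch, outputs):
            fut.set_result(out)                       # scatter results to callers

async def model(batch):
    return [x * 10 for x in batch]

async def main():
    q = BatchQueue(model, max_batch_size=3)
    # Three concurrent requests hit the size trigger and share one handler call.
    return await asyncio.gather(q.submit(1), q.submit(2), q.submit(3))

print(asyncio.run(main()))  # [10, 20, 30]
```

Each caller awaits only its own future, so requests complete independently even though the model ran once for all of them.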

### 1. Collection phase (buffering)

Incoming requests that hit the batched method are appended to a replica-local buffer.

Each buffered entry stores:

- the request arguments (or decoded payload)
- a future representing that request’s eventual response

### 2. Flush conditions (size or time)

The buffer is flushed when either condition becomes true:

- **Size trigger**: buffer length reaches `max_batch_size`
- **Time trigger**: `batch_wait_timeout_s` elapses since the **first** item currently in the buffer

This is why batching can increase latency at low QPS: a request may wait up to `batch_wait_timeout_s` for more arrivals.
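The low-QPS latency cost is easy to see in isolation. This standalone sketch (timeout value is illustrative) models a lone request, which can never satisfy the size trigger and therefore waits out the timer:

```python
import asyncio
import time

async def lone_request(batch_wait_timeout_s=0.05):
    # A single queued request with no further arrivals: only the time
    # trigger can flush its batch (of size 1).
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    async def flush_after_timeout():
        await asyncio.sleep(batch_wait_timeout_s)
        fut.set_result("flushed as a batch of 1")

    start = time.monotonic()
    asyncio.create_task(flush_after_timeout())
    await fut
    return time.monotonic() - start

waited = asyncio.run(lone_request())
print(f"lone request waited ~{waited * 1000:.0f} ms before its batch flushed")
```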

### 3. Execution phase (single call)

Serve invokes your batched handler **once** with a list of inputs.

This is where you typically vectorize:

- stack/concat tensors
- run one forward pass
- split/scatter outputs back

### 4. Scatter phase (complete futures)

When the batched handler returns a list of outputs, Serve resolves the stored futures in order.

Each original HTTP request then completes independently with its corresponding output.

## Configuration & Tuning

For a deep dive into how batching interacts with concurrency limits (specifically why `max_ongoing_requests` must be larger than `max_batch_size`), see **[Queues and backpressure](queues-and-backpressure.md)**.

Quick tips:

- Increase `max_batch_size` if the model benefits from larger batches and you have headroom.
- Increase `batch_wait_timeout_s` to favor fuller batches; decrease it to favor latency.

## Next

- Request flow including queue points: [Request lifecycle](request-lifecycle.md)
- Queueing and rejection controls: [Queues and backpressure](queues-and-backpressure.md)
- “Knobs” reference and meanings: [Configuration reference](../guides/configuration-reference.md)