
Releases: illuin-tech/colpali

ColQwen3.5 and transformers ModernVBERT

31 Mar 14:33
17b86cd


What's Changed

Full Changelog: v0.3.14...v0.3.15

0.3.14: transformers 5

24 Feb 11:28
b34c388


[0.3.14] - 2026-02-24

Added

  • Add ColQwen3 and BiQwen3 support (model + processor).
  • Add regression tests for ColPaliProcessor to validate Transformers v5 modality registration and fallback loading behavior when a processor bundle is incomplete.

Changed

  • Bump runtime compatibility to transformers>=5.0.0,<6.0.0, peft>=0.18.0,<0.19.0, and accelerate>=1.1.0,<2.0.0, and to the latest torch.
  • Update supported Python versions to >=3.10,<3.15 and align CI workflows to Python 3.10–3.14.
  • Update all affected processor subclasses (Qwen2/Qwen2.5/Qwen3, Gemma3, Idefics3, ModernVBert, Qwen2.5 Omni) to explicit __init__ modality signatures required by Transformers v5 ProcessorMixin.
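To picture the processor-signature migration above, here is a hypothetical, framework-free sketch. These stand-in classes are illustrative only; the real processors subclass Transformers' ProcessorMixin:

```python
# Illustrative stand-ins only -- not the real Transformers v5 API.

class LegacyStyleProcessor:
    """Pre-v5 style: modality components hidden in a catch-all signature."""

    def __init__(self, *args, **kwargs):
        # The framework cannot tell from this signature which modalities exist.
        self.image_processor, self.tokenizer = args

class ExplicitStyleProcessor:
    """v5 style: each modality component is an explicit named parameter."""

    def __init__(self, image_processor=None, tokenizer=None):
        # Explicit names let the framework introspect and register modalities.
        self.image_processor = image_processor
        self.tokenizer = tokenizer
```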

Fixed

  • Fix ColPali/PaliGemma model loading under Transformers v5 by adapting wrapper internals to new module layout and tied-weights expectations.
  • Fix ColPali processor loading for checkpoints without a complete processor bundle by explicitly falling back to AutoImageProcessor + AutoTokenizer.
  • Fix ColPali collator image token id lookup to use convert_tokens_to_ids, compatible with Transformers v5 tokenizer backend changes.
  • Fix test collection on Python 3.14 by making tests an explicit package (tests/__init__.py).
  • Fix CI formatting failure by applying ruff format to updated ColPali processing tests.
  • Fix ColQwen2 and ColQwen2.5 initialization across Transformers versions by resolving hidden size from either config.hidden_size or config.text_config.hidden_size.
  • Call post_init() in ColIdefics3 and ColModernVBert to align model initialization with Transformers v5 expectations.
  • Improve VisualRetrieverCollator image token id resolution by preferring processor-level image_token_id when available.
  • Fix ColQwen2 and ColQwen2.5 LoRA checkpoint key remapping for custom_text_proj (base_model.model.* -> model keys) to avoid missing/unexpected adapter keys at load time.
  • Fix ColPali LoRA adapter key remapping for custom_text_proj (base_model.model.* -> model keys) and ignore expected missing model.lm_head.weight during load.
  • Fix ColModernVBert LoRA adapter key remapping for custom_text_proj (base_model.model.* -> model keys) to avoid missing/unexpected adapter keys at load time.
  • Fix ColQwen2.5-Omni LoRA adapter key remapping for custom_text_proj (base_model.model.* -> model keys) to avoid missing/unexpected adapter keys at load time.
  • Fix ColQwen3 LoRA adapter key remapping for custom_text_proj (base_model.model.* -> model keys) to avoid missing/unexpected adapter keys at load time.
  • Fix ColGemma3 LoRA adapter key remapping for custom_text_proj (base_model.model.* -> model keys) to avoid missing/unexpected adapter keys at load time.
  • Ensure adapter loading remains robust across Transformers v5 base-load and PEFT adapter-load code paths, preventing silent fallback to randomly initialized projection adapters in retrieval models.
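The recurring custom_text_proj remapping above can be sketched, in simplified form, as a pure key rewrite. Real PEFT state dicts carry more nesting; the key names here are abbreviated for illustration:

```python
def remap_custom_text_proj_keys(state_dict):
    # Simplified sketch: rewrite PEFT-style "base_model.model.*" keys for
    # custom_text_proj to plain model keys, so the adapter loads without
    # missing/unexpected keys. Real checkpoints have additional structure.
    prefix = "base_model.model.custom_text_proj."
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(prefix):
            remapped["custom_text_proj." + key[len(prefix):]] = value
        else:
            remapped[key] = value
    return remapped
```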

Tests

  • Cover ColQwen3 processing and modeling with slow integration tests.
  • Run targeted non-slow processing tests for Gemma3, Idefics3, ModernVBert, Qwen2, Qwen2.5 and Qwen3 after the Transformers v5 processor-signature migration.
  • Run slow ColPali model-loading and query-forward integration tests under Transformers v5 to validate end-to-end loading behavior.
  • Expand adapter checkpoint key remapping regression tests to cover ColPali, ColGemma3, ColQwen2, ColQwen2.5, ColQwen3, ColQwen2.5-Omni and ColModernVBert, including registry-backed conversion checks where needed.

v0.3.13: ModernVBert

15 Nov 18:37
174055b


[0.3.13] - 2025-11-15

Added

  • Add ModernVBERT to the list of supported models

Fixed

  • Fix training with multiple hard negatives
  • Fix multi-dataset sampling so that each dataset's probability of being picked is weighted by its size
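A minimal sketch of size-weighted dataset selection, assuming each dataset is represented only by its size (the function name is illustrative, not the library's API):

```python
import random

def sample_dataset_index(dataset_sizes, rng=random):
    # Pick a dataset index with probability proportional to its size.
    total = sum(dataset_sizes)
    r = rng.uniform(0, total)
    cumulative = 0
    for i, size in enumerate(dataset_sizes):
        cumulative += size
        if r < cumulative:
            return i
    return len(dataset_sizes) - 1  # guard against floating-point edge cases
```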

Changed

  • Bump transformers, torch, and peft version support

v0.3.12

16 Jul 10:16


[0.3.12] - 2025-07-16

Added

  • Video processing for ColQwen-Omni

Fixed

  • Fixed loading of PaliGemma and ColPali checkpoints (bug introduced in transformers 4.52)
  • Fixed loading of SmolVLM (Idefics3) processors that didn't transmit image_seq_len (bug introduced in transformers 4.52)

v0.3.11

04 Jul 16:23
0fcbe49


[0.3.11] - 2025-07-04

Added

  • Added BiIdefics3 modeling and processor.
  • [Breaking] (minor) Remove support for context-augmented queries and images
  • Make processor docstrings uniform
  • Update the collator to align with the new function signatures
  • Add a process_text method to replace process_query. The latter is still supported for now but will be deprecated later.
  • Introduce the ColPaliEngineDataset and Corpus classes to delegate all data loading to a standard format before training. Users can override the dataset class as needed for their specific use cases.
  • Added smooth_max option to loss functions
  • Added weighted in_batch terms for losses with hard negatives
  • Added an option to filter out (presumably) false negatives during online training
  • Added a training script in pure torch without the HF trainer
  • Added a sampler to train with multiple datasets at once, with each batch coming from the same source. (experimental, might still need testing on multi-GPU)
  • Add score normalization to late-interaction (LI) models (dividing by token length) for better performance with CE loss
  • Add experimental PLAID support
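The smooth_max loss option above is commonly implemented as a temperature-scaled logsumexp; a minimal sketch, where the tau parameter name is illustrative:

```python
import math

def smooth_max(values, tau=0.1):
    # Temperature-scaled logsumexp: approaches the hard max as tau -> 0,
    # while staying differentiable for use inside a loss function.
    m = max(values)
    return m + tau * math.log(sum(math.exp((v - m) / tau) for v in values))
```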

Changed

  • Stop pooling queries across GPUs and instead pool only documents, enabling training with much larger batch sizes. We now recommend training with accelerate launch.
  • Update loss functions for better abstraction and coherence across the various losses, yielding small speedups and lower memory requirements.

v0.3.10: minor updates & dependency bumps

18 Apr 16:51


[0.3.10] - 2025-04-18

Added

  • Add LambdaTokenPooler to allow for custom token pooling functions.
  • Added training losses with negatives to InfoNCE type losses
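An InfoNCE-style loss with explicit hard negatives can be sketched as follows (scalar version for illustration; the real losses operate on batched tensors):

```python
import math

def info_nce_with_negatives(pos_score, neg_scores, temperature=0.05):
    # Negative log-softmax of the positive score against the positive
    # plus all (hard) negative scores.
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```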

Changed

  • Fix similarity map helpers for ColQwen2 and ColQwen2.5.
  • [Breaking] (minor) Remove support for Idefics2-based models.
  • Disable multithreading in HierarchicalTokenPooler if num_workers is not provided or is 1.
  • [Breaking] (minor) Make pool_factor an argument of pool_embeddings instead of a HierarchicalTokenPooler class attribute
  • Bump dependencies for transformers, torch, peft, pillow, accelerate, and others
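As a simplified illustration of passing pool_factor to the pooling call: positional mean-pooling stands in here for the real hierarchical clustering.

```python
def pool_embeddings(embeddings, pool_factor=2):
    # Simplified stand-in: mean-pool each run of `pool_factor` consecutive
    # token vectors. HierarchicalTokenPooler clusters tokens hierarchically
    # rather than by position; this only shows the pool_factor-as-argument shape.
    pooled = []
    for i in range(0, len(embeddings), pool_factor):
        chunk = embeddings[i:i + pool_factor]
        dim = len(chunk[0])
        pooled.append([sum(vec[d] for vec in chunk) / len(chunk) for d in range(dim)])
    return pooled
```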

v0.3.9

03 Apr 16:01
5b1b912


Added

  • Allow user to pass custom textual context for passage inference
  • Add ColQwen2.5 support and BiQwen2.5 support
  • Add support for token pooling with HierarchicalTokenPooler.
  • Allow user to specify the maximum number of image tokens in the resized images in ColQwen2Processor and ColQwen2_5_Processor.
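Capping the number of image tokens amounts to downscaling the image so its patch grid fits the budget. A hedged sketch (the patch size and defaults below are illustrative, not the processors' actual values):

```python
import math

def resize_for_max_tokens(width, height, patch_size=28, max_tokens=768):
    # Shrink the image so (width // patch) * (height // patch) <= max_tokens,
    # preserving the aspect ratio. Constants are illustrative only.
    tokens = (width // patch_size) * (height // patch_size)
    if tokens <= max_tokens:
        return width, height
    scale = math.sqrt(max_tokens / tokens)
    return int(width * scale), int(height * scale)
```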

Changed

  • Warn that evaluation differs from ViDoRe, and do not store results, to prevent confusion.
  • Remove duplicate resize code in ColQwen2Processor and ColQwen2_5_Processor.
  • Simplify sequence padding for pixel values in ColQwen2Processor and ColQwen2_5_Processor.
  • Remove deprecated evaluation (CustomRetrievalEvaluator) from trainer
  • Refactor the collator classes
  • Make processor input compulsory in ColModelTrainingConfig
  • Make BaseVisualRetrieverProcessor inherit from ProcessorMixin
  • Remove unused tokenizer field from ColModelTrainingConfig
  • Bump transformers to 4.50.0 and torch to 2.6.0 to keep up with the latest versions. Note that this leads to errors on mps until transformers 4.50.4 is released.
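The pixel-value padding simplification above can be pictured as padding variable-length patch sequences to a common length (flat lists stand in for the real tensors):

```python
def pad_pixel_sequences(sequences, pad_value=0.0):
    # Pad each per-image patch sequence with `pad_value` entries so that
    # all sequences share the length of the longest one.
    longest = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (longest - len(seq)) for seq in sequences]
```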

v0.3.8

29 Jan 09:17
59e94a9


Description

Fix dependencies in colpali-engine[train] and reorganize tests.

Features

Fixed

  • Fix peft version in colpali-engine[train]
  • Loosen upper bound for accelerate

Tests

  • Reorganize modeling tests
  • Add test for ColIdefics3 (and ColSmol)

v0.3.7

28 Jan 14:29
abe8fa0


Description

Add support for colSmol-256M and colSmol-500M.

Features

Changed

  • Bump transformers to 4.47 to support colSmol-256M and colSmol-500M

Fixed

  • Fix checkpoints used for ColQwen2 tests

v0.3.6

10 Jan 14:29
0bf8c4d


Description

Loosen default dependencies, but keep stricter dep ranges for the train dependency group.

Features

Added

  • Add expected scores in ColPali E2E test

Changed

  • Loosen package dependencies

Full Changelog: v0.3.5...v0.3.6