
cuda: restrict out_prod support to f32 inputs#110

Open
GuthL wants to merge 92 commits into tetherto:master from GuthL:fix/cuda-out-prod-support-guard

Conversation


@GuthL GuthL commented Mar 19, 2026

Summary

This narrows CUDA backend support for GGML_OP_OUT_PROD so the op is only offloaded to CUDA when both source tensors are actually F32.

case GGML_OP_OUT_PROD:
    {
        const ggml_tensor * src0 = op->src[0];
        const ggml_tensor * src1 = op->src[1];
        return op->type == GGML_TYPE_F32 &&
            src0 != nullptr && src1 != nullptr &&
            src0->type == GGML_TYPE_F32 &&
            src1->type == GGML_TYPE_F32;
    } break;

Why

ggml_backend_cuda_device_supports_op() currently advertises CUDA support for GGML_OP_OUT_PROD whenever op->type == GGML_TYPE_F32.

That is broader than what the current CUDA out_prod path safely handles. In BitNet LoRA training, mixed-type out_prod nodes can be scheduled onto CUDA even though the CUDA implementation is not robust for those source types.

I hit this while training Bitnet_B1_58 XL with llama-finetune-lora on an RTX A6000.
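Conceptually, the change moves from a destination-only check to a destination-plus-sources check. A plain-Python stand-in of the two gates (hypothetical helper names, not the actual ggml API):

```python
F32, F16 = "f32", "f16"

def supports_out_prod_old(dst_type, src0_type, src1_type):
    # Old gate: only the destination type is inspected.
    return dst_type == F32

def supports_out_prod_new(dst_type, src0_type, src1_type):
    # New gate: both sources must also be F32 before offloading.
    return dst_type == F32 and src0_type == F32 and src1_type == F32

# A mixed-type node, as can appear in BitNet LoRA training: the old check
# still offloads it to CUDA, the new one keeps it off the CUDA path.
print(supports_out_prod_old(F32, F16, F32))  # True
print(supports_out_prod_new(F32, F16, F32))  # False
```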

Reproduction / failure mode

With the current code, the run consistently reached:

  • Training split: datapoints=32055, batches_per_epoch=30452
  • Starting epoch 0 (step 0, lr=...)

and then died inside CUDA out_prod with:

** On entry to SGEMM parameter number 8 had an illegal value
CUDA error: an unsupported value or parameter was passed to the function
... ggml_cuda_out_prod ... out-prod.cu:113
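For context on decoding the message: cuBLAS reports these errors against the Fortran-order SGEMM argument list, where parameter number 8 is LDA, the leading dimension of A. That points at a bad stride or shape reaching the GEMM call, consistent with a source tensor type the path was never written for. A small decoder, assuming the standard reference-BLAS parameter order:

```python
# Argument order of the reference BLAS SGEMM routine (Fortran interface).
# XERBLA-style error messages number parameters starting from 1.
SGEMM_PARAMS = [
    "TRANSA", "TRANSB", "M", "N", "K",
    "ALPHA", "A", "LDA", "B", "LDB", "BETA", "C", "LDC",
]

print(SGEMM_PARAMS[8 - 1])  # LDA
```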

Validation

After applying this support-guard fix locally, the same BitNet training path no longer crashed at the previous out_prod boundary.

A smoke run on the same machine got past:

  • Loaded 5550 conversations
  • Training split: datapoints=5550, batches_per_epoch=5272
  • Starting epoch 0 (step 0, lr=...)
  • Checkpointing enabled, saving every 500 steps

with llama-finetune-lora still alive after the old crash point.

Notes

  • This aligns the support gate with what the CUDA backend is actually handling safely today.
  • If mixed-type CUDA out_prod is intended longer term, I think the support check should stay narrow until that implementation is complete.
  • I intended to open a matching issue first, but issues are disabled on this repository, so I’m putting the full report in the PR body.

olyasir and others added 30 commits August 15, 2025 09:05
* sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973)

* ggml : do not output unprintable characters on GGUF load failure (#14381)

* ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)

* ggml-cpu: add nnpa compile flag

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 4a9f60c201573128f73a65999b3e5cc497fae5c1)

* ggml-cpu: add fp16->fp32 nnpa first

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 8d4a7987f9c1887f716be96250f2caeee0253929)

* ggml-cpu: add fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 0ff0d6516247a41d2ade42b42cf0d676a4dd1627)

* ggml-cpu: better variable names

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 2f58bbcbb89c183340e252362b2a40651f573f1f)

* docs: update s390x docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 01b929491b50071a5d0572235dcf5a449da70aa7)

* ggml-cpu: add debugging prints to see if dlf16 is correct

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix print vs printf

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix float placeholder

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: ensure fp16 and fp32 load and stores are called

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fp16 load ensured to hit

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove sigint from fp16 store

for some reason, the function is not getting a hit when debugged with
    gdb. we will need to investigate further

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa for ggml_cpu_fp16_to_fp32

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: nnpa activate ggml_cpu_fp16_to_fp32 for 8 elements

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: nnpa switch to vec_xst test

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to vec_xst for 4 element loops also

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rework noop

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove noop, general code cleanup

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarify variable naming

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa for ggml_cpu_fp32_to_fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add breakpoint for debugging

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: test fix for conversion failure

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: disable fp32->fp16 nnpa conversions for now

there are some conversion failures in nnpa that requires the eyes of an
ibm stsm. will create a separate pr to introduce the fp32->fp16 change.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to elif macro

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: reattempt fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix compiler types

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: change to typedef vector types

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add 4 element loops for fp32->fp16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarified vector naming

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back fp32->fp16 store nnpa

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: activate nnpa fp32->fp16 or fp16->fp32 compute

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add nnpa macro check in ggml-impl

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add missing __func__

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: diagnose why __NNPA__ macro is not being defined

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: import vecintrin.h to fix compiler errors

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: update macro tests

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 157f856c34589566151630e294563a420702db39.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to importing ggml-cpu-impl instead

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix macro declaration

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: test more macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add debug prints

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bruteforce macro definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move macro definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add ggml-impl.h to cmakelists

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to private macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move s390x typedef to own header file

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 157f856c34589566151630e294563a420702db39)

* ggml-cpu: move things around

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back compile macros

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: switch to quotes for import

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add compiler error macro

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add s390x detection in ggml-src

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back compile definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: undo cmakelists work

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move s390x typedef to own header file"

This reverts commit 18d79e1a30b39d9aaa0bd58400c5cf2c32135c9a.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove typedefs.h

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove typedef from cmakelists

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add ggml-impl.h future notes

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: add todo comment for future reference

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: clarify naming of dlf16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove unnecessary target compile definitions

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move nnpa fp16->fp32 and fp32->fp16 to simd-mappings

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: update broken huggingface link for s390x

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix duplicate func names during compile

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: fix duplicate func names during compile"

This reverts commit fbb733451f27677063b914d4f6c9a9841d45b38d.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml: refactor fp32->fp16 and fp16->fp32 simd to ggml-cpu"

This reverts commit bd288e8fa52b5244f65cee21cb61062f1a9e0ca5.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: refactor fp16<->fp32 simd to ggml-cpu

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix missing simd-mappings.h import in quants.c

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix missing simd-mappings.h within repack

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix amx mmq missing simd-mappings.h

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: attempt at fixing loongarch failing build

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move nnpa together with other fp16<->fp32 simd

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: fix wrong refactor of ggml-base

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164176555

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: remove dependency on ggml-cpu from ggml-base

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: rename all fp16<->fp32 macros to prefix with ggml_cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164449406

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: remove mistaken fallback macro

fallback logic was already implemented but i was too sleepy to realise

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: move ggml_table_f32_f16 back to ggml-base due to ci failures"

This reverts commit 32a3533564bdb7902cefb9c89b1c9e956a81ce29.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml: move ggml_table_f32_f16 to ggml-cpu"

This reverts commit 9e40d984ad27d7b60392fb2b7548885201864fe4.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml: move ggml_table_f32_f16 to ggml-cpu

ref: https://github.com/ggml-org/llama.cpp/pull/14317#discussion_r2164775006

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 9e40d984ad27d7b60392fb2b7548885201864fe4)

* ggml: move ggml_table_f32_f16 to ggml-cpu.c

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: extern c ggml_table_f32_f16 + chore docs

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h

we rely on the variable declaration in ggml-cpu.c instead

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: dedup ggml_table_f32_f16 from simd-mappings.h"

This reverts commit f71b21d2f74f5e03ec0c2b4fefd3cbf395aecf16.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* ggml-cpu: bring back ggml_table_f32_f16

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* Revert "ggml-cpu: bring back ggml_table_f32_f16"

This reverts commit 2dce119178bed5ef5c8398c4230ddd14fef80e49.

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* fix ggml time initialization

* fix f32_f16 table init

* remove extra line

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: slaren <slarengh@gmail.com>

* musa: enable fp16 mma (all) and cublas on qy2 (#13842)

* musa: enable fp16 mma (all) and cublas on qy2

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: disable MUL_MAT_ID (q2_k × f32) due to precision issues

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* docs: update s390x documentation + add faq (#14389)

* docs: update s390x documentation + add faq

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* docs: add s390x z17 build q&a

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

* metal : batch rows copy in a single threadgroup (#14384)

* metal : batch rows copy in a single threadgroup

ggml-ci

* metal : handle some edge cases when threadgroup size is not a power of 2

ggml-ci

* metal : add special-case mat-vec mul for ne00 == 4 (#14385)

ggml-ci

* llama : return mistral-v7-tekken as default template only (#14390)

* cmake: regen vulkan shaders when shaders-gen sources change (#14398)

* Add shaders-gen sources as target deps

* model : gemma3n text-only (#14400)

* gemma3n

* add llm_graph_input_one

* convert : fix broken sentencepiece vocab (#14416)

* ggml : add ggml_set_rows (#14274)

* ggml : add ggml_set_rows

Add ggml_set_rows(a, b, c) which copies rows from 'b' into 'a' using
indices from 'c'.

ref: #8366
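A minimal sketch of the described semantics, using plain Python lists rather than the actual ggml tensors:

```python
def set_rows(a, b, c):
    """Sketch of ggml_set_rows(a, b, c): copy rows of b into a at indices c."""
    out = [row[:] for row in a]      # do not mutate the input
    for i, idx in enumerate(c):      # row i of b lands at row c[i] of a
        out[idx] = b[i][:]
    return out

a = [[0, 0], [0, 0], [0, 0]]
b = [[1, 2], [3, 4]]
c = [2, 0]                           # indices (I64 in ggml)
print(set_rows(a, b, c))  # [[3, 4], [0, 0], [1, 2]]
```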

* use I64 for indices

* ggml : add repeat impl for i64

* ggml : add ggml_is_contiguous_rows

* ggml : ggml_set_rows support broadcast

* ggml : ggml_set_rows support quantized dst

ggml-ci

* ggml : support GGML_TYPE_F32 ".from_float" trait

* ggml : ggml_set_rows update comment + better index name

* tests : add ggml_set_rows

* metal : add ggml_set_rows implementation

ggml-ci

* ggml : simplify forward_dup_f32

* ggml : fix supports_op

* tests : add comment to set_rows

* ggml : leave the repeat_i64 for a separate PR

ggml-ci

* ggml : set_rows use std::min instead of MIN

* ggml : better error message for set_rows unsupported type

* metal : perform op->type check only once

* tests : more consistent implementation + more tests

ggml-ci

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* recurrent : call balloc split_reset() in init_batch() (#14414)

ggml-ci

* graph : make llm_graph_context destructor virtual (#14410)

ggml-ci

* vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427)

This setting needs to be passed through to vulkan-shaders-gen

* ci : fix windows build and release (#14431)

* fix async_mode bug (#14432)

* model : add support for ERNIE 4.5 0.3B model (#14408)

Add Day-0 support for Baidu ERNIE 4.5 0.3B model.

Signed-off-by: Weizhao Ouyang <weizhao.ouyang@arm.com>

* vulkan: lock accesses of pinned_memory vector (#14333)

* vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)

* CUDA: add bf16 and f32 support to cublas_mul_mat_batched (#14361)

* CUDA: add bf16 and f32 support to cublas_mul_mat_batched

* Review: add type traits and make function more generic

* Review: make check more explicit, add back comments, and fix formatting

* Review: fix formatting, remove useless type conversion, fix naming for bools

* vulkan: Add fusion support for RMS_NORM+MUL (#14366)

* vulkan: Add fusion support for RMS_NORM+MUL

- Add a use_count to ggml_tensor, so we can detect if an output is used more than once.
- Change the ggml-vulkan rms_norm shader to optionally multiply by another tensor.
- Add detection logic and basic fusion logic in ggml-vulkan.
- Add some testing support for fusion. Rather than computing one node at a time, allow
for computing the whole graph and just testing one node's results. Add rms_norm_mul tests
and enable a llama test.
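The detection logic can be sketched as follows (hypothetical names and data layout, not the ggml-vulkan code): RMS_NORM may be fused with a following MUL only when the norm's output has exactly one consumer, that MUL.

```python
def can_fuse_rms_norm_mul(norm, mul, use_count):
    # the mul operands may be swapped, so check both sources
    consumes = mul["src0"] is norm or mul["src1"] is norm
    return (norm["op"] == "RMS_NORM" and mul["op"] == "MUL"
            and consumes and use_count[id(norm)] == 1)

norm = {"op": "RMS_NORM"}
mul = {"op": "MUL", "src0": norm, "src1": {"op": "NONE"}}
print(can_fuse_rms_norm_mul(norm, mul, {id(norm): 1}))  # True
print(can_fuse_rms_norm_mul(norm, mul, {id(norm): 2}))  # False
```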

* extract some common fusion logic

* fix -Winconsistent-missing-override

* move ggml_can_fuse to a common function

* build fix

* C and C++ versions of can_fuse

* move use count to the graph to avoid data races and double increments when used in multiple threads

* use hash table lookup to find node index

* change use_counts to be indexed by hash table slot

* minimize hash lookups

style fixes

* last node doesn't need single use.
fix type.
handle mul operands being swapped.

* remove redundant parameter

---------

Co-authored-by: slaren <slarengh@gmail.com>

* ggml : implement REGLU/GEGLU/SWIGLU ops (#14158)

* implement unary REGLU/GEGLU/SWIGLU cpu ops

* relax constraints

* duplicate shape of source

* fix ggml_vec_geglu_f16

* special case gated ops

* implement unary REGLU/GEGLU/SWIGLU cuda ops

* tighten constraints again

* refactor into GGML_GLU_OP

* metal : add glu kernels

ggml-ci

* add CUDA_GLU_BLOCK_SIZE [no ci]

* more constraints and use 64bit ints

ggml-ci

* 64bit multiplication [no ci]

* implement swapped variants (cpu/cuda)

* update comment [no ci]

ggml-ci

* Vulkan: Add GLU ops and shaders

* SYCL: Implement fused kernel GEGLU, SWIGLU and REGLU for single up+gate

* ggml : implement GLU for split up/gate (#14181)

* implement GLU for split up/gate

* add tests for ggml_glu_split

* Vulkan: Implement glu_split logic and shader support

* add split to logging [no ci]

* SYCL: refactor element_size ops and add split up and gate support to gated kernels

* SYCL: switch GEGLU to use tanh approximation

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Akarshan <akarshan@menlo.ai>
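For reference, the gated ops named in this commit combine an "up" half with an activated "gate" half. A plain-Python sketch of the element-wise math (which half receives the activation, and the exact constants, follow the common conventions; treat them as assumptions here):

```python
import math

def silu(x):
    return x / (1.0 + math.exp(-x))

def gelu_tanh(x):  # tanh approximation, as mentioned for the SYCL GEGLU
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

# gated pairs: activation(gate) * up, element-wise
def reglu(up, gate):  return [max(g, 0.0) * u for u, g in zip(up, gate)]
def swiglu(up, gate): return [silu(g) * u for u, g in zip(up, gate)]
def geglu(up, gate):  return [gelu_tanh(g) * u for u, g in zip(up, gate)]

print(reglu([2.0, 3.0], [1.0, -1.0]))  # [2.0, 0.0]
```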

* GGML: increase OP count in assertion

* Refactor: Optimize SYCL element-wise operations with unary function inlining

This commit refactors the SYCL element-wise operations to improve performance by:

- Inlining unary operations (sgn, abs, elu, gelu, silu, etc.) to reduce kernel launch overhead.
- Introducing helper functions `op_xxx` for each unary operation to encapsulate the logic.
- Replacing direct kernel calls with calls to these inlined functions.
- Using `__dpct_inline__` to encourage compiler inlining.
- Minor code cleanup and consistency improvements.

The changes aim to reduce kernel launch overhead and improve the overall efficiency of element-wise operations on SYCL devices.

* vulkan: Increase workgroup size for GLU, for performance (#14345)

* vulkan: Increase workgroup size for GLU, for performance

* vulkan: change GLU shaders to do one element per invocation rather than one row per workgroup

* merge fix

* metal : add support for split and swap

ggml-ci

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Akarshan <akarshan@menlo.ai>
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

* ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (#14443)

* SYCL: disable faulty fp16 exp kernel (#14395)

* SYCL: disable faulty fp16 CPU exponent for now

* Revert "SYCL: disable faulty fp16 CPU exponent for now"

This reverts commit ed0aab1ec31b4eb4b0f275dd7acd41d96a375202.

* SYCL: disable faulty fp16 CPU exponent for now

* Fix logic of disabling exponent kernel

* server : fix appearance of the chats list context menu for Safari (#14322)

* server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (#13196)

* initial commit for handling extra template kwargs

* enable_thinking and assistant prefill cannot be enabled at the same time

* can set chat_template_kwargs in command line

* added doc

* fixed formatting

* add support for extra context in generic template init

* coding standard: common/chat.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* coding standard:  common/chat.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Apply suggestions from code review

coding standard: cosmetic changes

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix merge conflict

* chat.cpp: simplify calls to apply to ensure systematic propagation of extra_context (+ the odd existing additional_context)

* normalize environment variable name

* simplify code

* prefill cannot be used with thinking models

* compatibility with the new reasoning-budget parameter

* fix prefill for non thinking models

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Olivier Chafik <olivier.chafik@gmail.com>

* scripts : make the shell scripts cross-platform (#14341)

* cmake : Remove redundant include path in CMakeLists.txt (#14452)

* Update docker.yml

Modified docker.yml so that the workflow no longer runs on a schedule; if the workflow is needed, it can be triggered manually.

* Remove redundant include path in CMakeLists.txt

The parent directory '..' was removed from the include directories for the ggml-cpu-feats target, to avoid unnecessary include paths.

* Enable scheduled Docker image builds

Uncomments the workflow schedule to trigger daily Docker image rebuilds at 04:12 UTC, improving automation and keeping images up to date.

* test-backend-ops : disable llama test (#14461)

* ggml-cpu: sycl: Re-enable exp f16 (#14462)

* metal : disable fast-math for some cpy kernels (#14460)

* metal : disable fast-math for some cpy kernels

ggml-ci

* cont : disable for q4_1

ggml-ci

* cont : disable for iq4_nl

ggml-ci

* memory : correctly handle failure in apply() (#14438)

ggml-ci

* Add Conv2d for CPU (#14388)

* Conv2D: Add CPU version

* Half decent

* Tiled approach for F32

* remove file

* Fix tests

* Support F16 operations

* add assert about size

* Review: further formatting fixes, add assert and use CPU version of fp32->fp16

* opencl : add GEGLU, REGLU, SWIGLU (#14456)

* ggml-quants : rename best_mad to best_error (ggml/1283)

This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.

The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.

* ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)

* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
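The coordinate mapping behind "align corners" can be sketched as below; these are the two standard resampling conventions, and the ggml flag is assumed here to follow them.

```python
def src_coord(dst_i, n_src, n_dst, align_corners):
    if align_corners:
        # grid endpoints of input and output coincide
        return dst_i * (n_src - 1) / (n_dst - 1) if n_dst > 1 else 0.0
    # half-pixel centers (the usual default)
    return (dst_i + 0.5) * (n_src / n_dst) - 0.5

print(round(src_coord(3, 4, 8, True), 4))   # 1.2857
print(src_coord(0, 4, 8, False))            # -0.25
```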

* sync : ggml

ggml-ci

* ggml : remove trailing whitespace (#0)

* add GELU_ERF (#14455)

* vulkan: Split large mul_mat_id to fit in shared memory (#14451)

* CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (#14411)

* [CANN]update to aclnnGroupedMatmulV2

Signed-off-by: noemotiovon <757486878@qq.com>

* Support MUL_MAT_ID on 310p

Signed-off-by: noemotiovon <757486878@qq.com>

* fix editorconfig

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>

* Add Vulkan images to docker.md (#14472)

Right now it's not easy to find those.

* ci : disable fast-math for Metal GHA CI (#14478)

* ci : disable fast-math for Metal GHA CI

ggml-ci

* cont : remove -g flag

ggml-ci

* ggml : Callback before abort (#14481)

* Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed.

* Return previous callback to allow callback chaining

* style fixes

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
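The "return previous callback to allow chaining" pattern from this commit can be sketched in Python (a stand-in for the real C function-pointer API; names are illustrative):

```python
_abort_callback = None

def set_abort_callback(cb):
    """Install cb as the abort callback and return the previous one."""
    global _abort_callback
    prev, _abort_callback = _abort_callback, cb
    return prev

log = []
set_abort_callback(lambda msg: log.append("first: " + msg))

# The second installer keeps the previous callback and forwards to it,
# so both handlers run when abort fires.
prev = set_abort_callback(lambda msg: (log.append("second: " + msg),
                                       prev and prev(msg)))

_abort_callback("out of memory")
print(log)  # ['second: out of memory', 'first: out of memory']
```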

* github : add OpenCL backend to issue templates (#14492)

* ci : add OpenCL to labeler workflow (#14496)

* opencl : update upscale to support align corners (#14488)

* opencl : skip empty nodes on cgraph compute (#14491)

* simple-chat : fix context-exceeded condition (#14494)

* simple-chat : fix context-exceeded condition

ggml-ci

* cont : fix n_ctx_used computation

ggml-ci

* opencl : fix possible buffer overflow in dump_tensor (#14490)

* ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (#14435)

ggml-ci

* vulkan: support softmax/FA batch and broadcast (#14449)

* CUDA: broadcasting for FlashAttention mask (#14500)

* CUDA: add softmax broadcast (#14475)

* CUDA: add softmax broadcast

* Pass by const ref

* Review: Use blockDims for indexing, remove designated initializers

* Add TODO for noncontigous input/output

* Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (#14309)

* ggml : add version function to get lib version (ggml/1286)

* ggml : add version function to get lib version

This commit adds a function `ggml_version()` to the ggml library that
returns the version of the library as a string.

The motivation for this is that it can be useful to be able to
programmatically check the version of the ggml library being used.

Usage:
```c
printf("GGML version: %s\n", ggml_version());
```
Output:
```console
GGML version: 0.0.2219
```

* ggml : add ggml_commit()

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* sync : ggml

ggml-ci

* llama : initial Mamba-2 support (#9126)

* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The max index is 31, so trimming the arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models
to avoid some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This would otherwise cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON

* cuda : graceful fallback for Mamba-1 models with weird embd size

* gguf-py : add support for chat template jinja files (#14508)

* add support for chat template jinja files

* remove gemma3n hack

* CUDA: add dynamic shared mem to softmax, refactor general usage (#14497)

* ggml : remove kompute backend (#14501)

ggml-ci

* ggml : fix FA mask dim 2 and 3 (#14505)

* ggml : fix FA mask dim 2 and 3

ggml-ci

* backends : unsupport batched FA in CUDA and Vulkan

ggml-ci

* vulkan : disable FA for mask->ne[2] != 1

* kv-cache : use ggml_set_rows (#14285)

* kv-cache : use ggml_set_rows

ggml-ci

* graph : separate k and v indices

ggml-ci

* cont : remove redundant ifs

ggml-ci

* kv-cache : improve find_slot impl

* kv-cache : bounds-check when accessing slot_info indices

* kv-cache : add comments

ggml-ci

* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends

ggml-ci

* convert : correct gemma 3n conversion (#14450)

* convert : correct gemma 3n conversion

* rm redundant code

* Fix conditional enabling following arch checks for ggml-sycl (#14504)

Signed-off-by: nscipione <nicolo.scipione@codeplay.com>

* ggml: backward pass for split swiglu (#14483)

* vulkan: support mixed/deepseekR1 FA head sizes (#14509)

* vulkan: better parameterize FA by head sizes

* vulkan: support mixed/deepseekR1 FA head sizes

* opencl : broadcast for soft_max (#14510)

* ggml : implement GEGLU_ERF and GEGLU_QUICK ops (#14445)

* CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (#14002)

Co-authored-by: luyuhong <luyuhong@kylinos.cn>

* batch : add n_used count (#14512)

ggml-ci

* graph : prepare for 4D mask (#14515)

ggml-ci

* batch : add optional for sequential equal split (#14511)

ggml-ci

* metal : disable fast math in all quantize kernels (#14528)

ggml-ci

* test-backend-ops: add support for specifying output format (#14368)

* test-backend-ops: add support for specifying output format

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Add build_commit and build_number in test_result

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* refactor

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Get build commit from ggml_commit()

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Merge errors into test_operation_info && address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* remove visitor nonsense

* remove visitor comment

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

* Address review comments

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>

---------

Signed-off-by: Xiaodong Ye <yeahdongcn@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>

* eval-callback : check for empty input (#14539)

* opencl: add GELU_ERF (#14476)

* server : fix assistant prefilling when content is an array (#14360)

* vulkan: Handle updated FA dim2/3 definition (#14518)

* vulkan: Handle updated FA dim2/3 definition

Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit.

* handle null mask for gqa

* allow gqa with dim3>1
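The dword-packing trick mentioned in this commit message can be sketched as follows. This is illustrative only: the helper names and exact field layout are assumptions, not the actual shader interface; the low bit carries the "mask present" flag and the remaining bits carry `n_head_log2`, so both values fit in a single 32-bit push constant slot.

```cpp
#include <cassert>
#include <cstdint>

// Pack a boolean flag and a small integer into one dword (illustrative layout).
static uint32_t pack_mask_n_head_log2(bool has_mask, uint32_t n_head_log2) {
    return (n_head_log2 << 1) | (has_mask ? 1u : 0u);
}

// Recover the flag from the low bit.
static bool unpack_has_mask(uint32_t packed) {
    return (packed & 1u) != 0;
}

// Recover n_head_log2 from the upper bits.
static uint32_t unpack_n_head_log2(uint32_t packed) {
    return packed >> 1;
}
```

Packing like this is a common way to stay under the 128-byte push constant minimum guaranteed by the Vulkan spec.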

* vulkan: fix rms_norm+mul fusion (#14545)

The fused operation was grabbing the epsilon value from the wrong place.

Add an env var to disable fusion.

Add some missing checks for supported shapes/types.

Handle fused rms_norm+mul in check_results.

* vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (#14485)

Commit taken from remyoudompheng's PR https://github.com/ggml-org/llama.cpp/pull/12260

Co-authored-by: Rémy Oudompheng <remyoudompheng@gmail.com>

* CUDA: add bf16 and i32 to getrows (#14529)

* llama : remove ggml_cont where possible (#14568)

* llama : fix incorrect minicpm3 v_states shape (#14571)

* musa: fix build warnings (unused variable) (#14561)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* CUDA: add bilinear interpolation for upscale (#14563)

* cuda : fix rope with partial rotation and non-cont src (#14580)

* cuda : fix rope non-cont

ggml-ci

* cont : fix multi-rope + add test

ggml-ci

* sycl : try fix

ggml-ci

* cont : fix sycl + clean-up cuda

ggml-ci

* vulkan: increase timeout for CI (#14574)

* model : add hunyuan moe (#14425)

* model : add hunyuan moe

* tokenizer ok

* fix tensor name

* cgraph init

* chat template

* wip

* almost working

* skip embed, fix bos

* cleanup

* yarn scaling

* cleanup

* correct rope type

* failed token fix

* ntk alpha freq_base

* tokenization working

* cleanup and pr changes

* vocab_size sanity check

* ntk alpha generic

* Update convert_hf_to_gguf.py

* Apply suggestions from code review

* fix regression

* fix style

---------

Co-authored-by: kooshi <1934337+kooshi@users.noreply.github.com>

* server: Add ability to mount server at prefix (#14544)

* Add server_prefix

* Correct server path env

* Rename cli flag to --api-prefix

* Change all to api_prefix

* vulkan : fix rope with partial rotation and non-cont src (#14582)

* memory : fix broken batch splits for recurrent cache (#14575)

Splits producing more than one ubatch per batch for recurrent models
were broken with #14512.

This fixes it by moving the completeness check after the ubatch split loop.
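The shape of that fix can be illustrated with a minimal splitter (not the actual llama.cpp code; names and logic are a simplified sketch): break a batch into ubatches, and only verify completeness once the whole loop has finished, so that batches producing more than one ubatch are not wrongly rejected mid-split.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative only: split n_tokens into ubatches of at most n_ubatch tokens.
static std::vector<size_t> split_into_ubatches(size_t n_tokens, size_t n_ubatch) {
    std::vector<size_t> sizes;
    size_t consumed = 0;
    while (consumed < n_tokens) {
        const size_t n = std::min(n_ubatch, n_tokens - consumed);
        sizes.push_back(n);
        consumed += n;
    }
    // completeness check AFTER the split loop: checking inside the loop
    // would flag any batch that needs more than one ubatch
    if (consumed != n_tokens) {
        sizes.clear();
    }
    return sizes;
}
```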

* model : add SmolLM3 (#14581)

* Init - first pass.

* Model -> ModelBase.

* fix errors in conversion.

* Update the graph.

* up.

* up.

* wip

* cgraph ok

* rm redundant code

---------

Co-authored-by: Vaibhavs10 <vaibhavs10@gmail.com>

* model : fix hunyuan moe chat template (#14584)

Signed-off-by: stevenkuang <stevenkuang@tencent.com>

* vulkan: optimize flash attention split_k_reduce (#14554)

* vulkan: allow FA split_k with smaller KV values

* vulkan: spread split_k_reduce work across more threads

k_num can get rather large. Use the whole workgroup to reduce the M/L values.

Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).

* convert : fix smollm3 jinja template (#14586)

* model : add support for Falcon-H1 family (#14534)

* v1

* push more fixes

* another fix

* fix

* more fixes

* minor fix

* more cleaning on python code

* python fixes

* changed precision for multipliers float 32->64

* fixes

* another fix

* fix

* pre-norm -> norm

* fix

* Revert "fix"

This reverts commit 243e4d1a50bd73467d99f6b289b9a1826f83b94b.

* fix

* small fix ffn_norm

* try

* mix instead of max

* fix vocab size

* conflict solve

* fixed multipliers

* falcon-h1 specific vocab resolved

* read arch from gguf.MODEL_ARCH

* mamba_d_ssm added to d_inner find_hparam

* remove unused functions from gguf_writer.py

* override modify_tensors instead of get_tensors

* fix conversion and d_inner

* added some cb functions for debugging purposes

* inp_out_ids moved outside of layers loop

* mup_vec create as float64

* fix rope_theta

* injected mup

* clean ups

* rm extra space

* rm unused MAMBA_CHUNK_SIZE

* rm unused key

* add bos False

* changed ROPE_TYPE

* cleaning debugging stuff

* cleaning debug quant

* fix comment

* some cleanups

* some cleanups

* Update src/llama-model-loader.cpp

* more cleanups

* moe cleanups

* d_ssm -> d_inner;

* cleaning unused hparams

* cleanup

* more cleanups

* more cleanups on python conversion;

* minor cleanups

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* remove todo

* added falcon-h1

* tensor not required

* clean

* remove unneeded attributes

* more cleanups and fixed conversion

* remove final_norm

* flake8 fixes

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* flake8 fixes

* Update src/llama-hparams.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-arch.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* added hashes

* Update src/llama-arch.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update src/llama-vocab.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update the update file

* Revert "update the update file"

This reverts commit 082ab4ad2a3927384d878666a5f8cae4eb15f577.

* fix: address suggestions

* fix: update convert_hf_to_gguf.py

* Update gguf-py/gguf/constants.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model-loader.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* d_inner fixed

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* reshaping ssm_norm for 34B

* removing generate_mup

* remove duplicates metadata keys

* rm comment

* final comment

* fix unused args

* fix constants

* fix bad merge

* Update src/llama-model.cpp

Co-authored-by: compilade <git@compilade.net>

* falcon-h1: remove unused ssm_in_b and bad merge

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* falcon-h1: fix last comment

* Update convert_hf_to_gguf.py

Co-authored-by: compilade <git@compilade.net>

* falcon-h1: revert add_add_bos(False)

* falcon-h1: fix tied weights

* falcon-h1: remove whitespace

* falcon-h1: fix wrong size param

* falcon-h1: fix whitespace issues

---------

Co-authored-by: younesbelkada <younes.belkada@tii.ae>
Co-authored-by: Younes B <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: compilade <git@compilade.net>

* llama : remove unintended whitespace (#14592)

* model : add skt/A.X-4.0 model vocabulary (#14589)

* ggml : prevent integer overflow in gguf tensor size calculation (#14595)
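The general guard behind a fix like this can be sketched as a pre-multiplication check (this is an illustration of the technique, not the actual gguf code): before multiplying two `size_t` dimensions in a tensor-size calculation, verify the product cannot wrap around.

```cpp
#include <cstddef>
#include <cstdint>

// Return true if a*b would exceed SIZE_MAX (i.e. wrap around).
static bool would_overflow_mul(size_t a, size_t b) {
    return a != 0 && b > SIZE_MAX / a;
}
```

A loader can then reject a tensor whose declared dimensions fail this check instead of allocating a wrapped-around (too small) buffer.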

* ggml : add ggml_scale_bias (#14417)

* ggml : add ggml_scale_bias

* ggml_vec_mad1_f32

* add more simd

* add CUDA

* sycl

* vulkan

* cann (placeholder)

* opencl

* will this fix cpu?

* fix cuda

* suggestions from coderabbit

* fix cann compile error

* vDSP_vsmsa

* rm __ARM_FEATURE_SVE

* use memcpy for op params

* make code looks more consistent

* use scalar for __ARM_FEATURE_SVE

* add x param to ggml_vec_mad1_f32
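A scalar reference form of the fused scale-plus-bias vector op this commit series adds might look like the following. This is a sketch under the assumption that the op computes `y[i] = x[i]*s + b`; the real ggml kernels layer SIMD variants (NEON/SVE, vDSP, CUDA, etc.) on top of this shape.

```cpp
#include <cstddef>

// Scalar reference: y[i] = x[i]*s + b for n elements (assumed semantics).
static void vec_mad1_f32(size_t n, float * y, const float * x, float s, float b) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = x[i]*s + b;
    }
}
```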

* llama : support Jamba hybrid Transformer-Mamba models (#7531)

* wip: llama : separate recurrent states from the KV cache

This will be necessary to support Jamba
(and other recurrent models mixed with Attention).

Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states.

* llama : use std::find for seq_nodes in llama_rs_cache

* llama : state checkpoints for recurrent models

* llama : correctly handle more edge cases for the rs cache

* llama : rename many llama_kv_cache_* functions

* llama : remove useless return value for some llama_cache_* functions

* llama : rethink recurrent state cell counts

* llama : begin work on support for variable GQA

This will also be useful for Jamba if we consider the Mamba layers
to have 0 KV heads.

* llama : gracefully fail when not finding hybrid slot

* llama : support Jamba

* llama : fix BERT inference without KV cache

* convert-hf : check for unprocessed Jamba experts

* convert-hf : support Mini-Jamba conversion

* llama : fix Jamba quantization sanity checks

* llama : sequence-length-aware batch splitting

* llama : use equal-sequence-length sub-batches for recurrent models

* ggml : simplify SSM-related operators

* llama : make recurrent state slot allocation contiguous

* llama : adapt internal uses of batches to llama_ubatch

* llama : fix batch split output count for embeddings

* llama : minimize swaps when reordering logits

This reduces overhead when running hellaswag
on thousands of sequences with very small 100k params Mamba models.

* llama : fix edge case finding batch seq_id of split recurrent cell

This otherwise was a problem when running the HellaSwag benchmark
with small batch sizes, making it crash.

* llama : avoid copies for simple batch splits

* ggml : make ggml_ssm_scan not modify its source tensors

* llama : fix shared recurrent tail cell count for small ubatch sizes

Otherwise it was impossible to run the 'parallel' example with '-ub 1'
with a Mamba or Jamba model.

* llama : fix .base() compilation error on Windows

* llama : allow doing the equivalent of SSM_CONV with SUM_ROWS and MUL

* ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors

The implementation already supported it,
and this makes Mamba's conv step slightly faster.

* mamba : fix non-contiguous usage of ggml_silu

* llama : session saving and reloading for hybrid models

* convert_hf : fix Jamba conversion

* llama : fix mixed signedness comparison

* llama : use unused n_embd_k_gqa in k_shift

This also slightly reduces the diff from the master branch

* llama : begin renaming llama_past back to llama_kv_cache

* llama : remove implicit recurrent state rollbacks

* llama : partially apply clang-format style

* convert : fix jamba conv1d shape squeezing

* graph : add back hybrid memory graph input

But this time it contains the sub-cache graph inputs.
This *should* make it easier to handle updating the inputs
when caching the graph (eventually).

* model : add Jamba to Mamba-specific hparams printing

* jamba : remove redundant nullptr initializations

* model : remove unnecessary prefix for tensor loading constants

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model : use ggml_swiglu_split for Mamba

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model : make falcon-h1 use shared mamba2 layer builder

* memory : avoid referring to KV in recurrent cache logs

* gguf-py : avoid adding duplicate tensor mappings for Jamba

Some of the tensor names are common with Llama4

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* llama : remove llm_graph_input_one (#14603)

* cuda : support Falcon-H1 state size for SSM_SCAN (#14602)

* cmake : llguidance build parser library only (#14608)

* cmake : bump llguidance version to v1.0.1 (#14609)

* llama : minor coding style fix for smollm3 (#14605)

* SYCL: Initial set_rows kernel implementation (#14562)

* SYCL: Initial set_rows kernel implementation

* Revert max_threads to 256

* Refactor set_rows and address review comments

* Deduplicate conversion function

* Remove guard before kernel launch and refactor

* Fix and add back SFINAE

* cmake : do not search for curl libraries by ourselves (#14613)

* cmake : do not search for curl libraries by ourselves

* run : do not search for curl libraries by ourselves

* Docs: script to auto-generate ggml operations docs (#14598)

* Docs: script to auto-generate ggml operations docs

* Review: formatting changes + change github action

* Use built-in types instead of typing

* docs : add BLAS and Metal ops

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Smoldocling support (#14597)

* support for smoldocling

* fixed merge conflicts

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Gabe Goodhart <gabe.l.hart@gmail.com>

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Gabe Goodhart <gabe.l.hart@gmail.com>

* merge conflicts

* pre tokenizer merge fix

* convert : fix smollm3 jinja template (#14586)

Signed-off-by: ryan-mangeno <ryanmangeno@gmail.com>

* support for smoldocling

Signed-off-by: ryan-mangeno <ryanmangeno@gmail.com>

* fixed merge conflicts

Signed-off-by: ryan-mangeno <ryanmangeno@gmail.com>

* Update src/llama-vocab.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.h

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* safetensors tensor mapping

Signed-off-by: ryan-mangeno <ryanmangeno@gmail.com>

* added back accidental removal of clean spaces for hunyuan

* Update src/llama-vocab.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* updated hash and reordererd model list

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-vocab.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update include/llama.h

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf_update.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-vocab.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* removed old tensor name

* removed tensor mappings -> handled by smolvlm

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Signed-off-by: ryan-mangeno <ryanmangeno@gmail.com>
Co-authored-by: Gabe Goodhart <gabe.l.hart@gmail.com>
Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: compilade <git@compilade.net>

* opencl: add `set_rows` for `f16` and `f32` (#14547)

* opencl: add `set_rows` for `f16` and `f32`

* opencl: better choose workgroup size for `set_rows`

* opencl: add tiled mul_mat_f16_f32 (#14535)

* add tiled mul_mat_f16_f32

* fix trailing whitespace

* add insightful comments

* model : Granite Four (#13550)

* wip: llama : separate recurrent states from the KV cache

This will be necessary to support Jamba
(and other recurrent models mixed with Attention).

Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states.

* llama : use std::find for seq_nodes in llama_rs_cache

* llama : state checkpoints for recurrent models

* llama : correctly handle more edge cases for the rs cache

* llama : rename many llama_kv_cache_* functions

* llama : remove useless return value for some llama_cache_* functions

* llama : rethink recurrent state cell counts

* llama : begin work on support for variable GQA

This will also be useful for Jamba if we consider the Mamba layers
to have 0 KV heads.

* llama : gracefully fail when not finding hybrid slot

* llama : support Jamba

* llama : fix BERT inference without KV cache

* convert-hf : check for unprocessed Jamba experts

* convert-hf : support Mini-Jamba conversion

* llama : fix Jamba quantization sanity checks

* llama : sequence-length-aware batch splitting

* llama : use equal-sequence-length sub-batches for recurrent models

* ggml : simplify SSM-related operators

* llama : make recurrent state slot allocation contiguous

* llama : adapt internal uses of batches to llama_ubatch

* llama : fix batch split output count for embeddings

* llama : minimize swaps when reordering logits

This reduces overhead when running hellaswag
on thousands of sequences with very small 100k params Mamba models.

* llama : fix edge case finding batch seq_id of split recurrent cell

This otherwise was a problem when running the HellaSwag benchmark
with small batch sizes, making it crash.

* llama : avoid copies for simple batch splits

* llama : use im2col and mul_mat to perform convolution for Mamba

This removes the need for ggml_ssm_conv!!!
But performance seems slightly worse on my system,
especially for prompt processing.
Maybe ggml_mul_mat isn't optimized for small row sizes?
More performance testing is necessary until GGML_OP_SSM_CONV is removed.

* ggml : make ggml_ssm_scan not modify its source tensors

* llama : fix shared recurrent tail cell count for small ubatch sizes

Otherwise it was impossible to run the 'parallel' example with '-ub 1'
with a Mamba or Jamba model.

* llama : fix .base() compilation error on Windows

* llama : allow doing the equivalent of SSM_CONV with SUM_ROWS and MUL

* ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors

The implementation already supported it,
and this makes Mamba's conv step slightly faster.

* llama : rename llama_cache to llama_past

This can be changed back later if the name change is wrong.
I was renaming the functions anyway to generalize kv-cache-related
functions to hybrid and recurrent model architectures.
I think llama_past is a better name than llama_cache for a combined
kv cache and recurrent state cache, because the states it contains
pretty much always come before the newly-added ones for any particular
sequence. Also 'llama_past_clear' sounds more obvious in what it does
than 'llama_kv_cache_clear'. The future is what the models generate.
(For embeddings, the kv cache isn't really used anyway)

Still, I'm open to better suggestions.

* examples : replace llama_kv_cache_seq_* with llama_past_seq_*

* mamba : fix non-contiguous usage of ggml_silu

* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : session saving and reloading for hybrid models

* convert_hf : fix Jamba conversion

* llama : fix mixed signedness comparison

* llama : use unused n_embd_k_gqa in k_shift

This also slightly reduces the diff from the master branch

* llama : begin renaming llama_past back to llama_kv_cache

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The max index is 31, so trimming the arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models
to avoid some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* llama : remove implicit recurrent state rollbacks

* llama : partially apply clang-format style

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* feat: Add conversion for Bamba models

This is borrowed and adapted from the original implementation
https://github.com/ggml-org/llama.cpp/pull/10810

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add Granite 4 conversion

This is a manual copy from my draft branch
https://github.com/gabe-l-hart/llama.cpp/blob/GraniteFourDraft/convert_hf_to_gguf.py#L5076

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Plumb bamba through llama-arch

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add bamba to llama_arch_is_hybrid_recurrent

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add optional mamba ssm_in bias tensor

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add template specialization for get_arr to load a vector<uint32_t> for layer index arr in hparams

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Use an explicit bool to determine mamba vs mamba2

This allows other architectures like bamba and granitemoehybrid to use
mamba2 without a growing architecture `if` statement inside the mamba
implementation.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Isolate mamba(2) and granite attention layer building in static methods

This will allow these layer-builder methods to be used from other build
structs without complex inheritance.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Use per-layer sizes in granite build_attention_layer

Also no need to pass in kv cache since it's already in the inp_attn

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: First (broken) pass at end-to-end Bamba implementation

It generates (garbage) tokens! Still lots of debugging to do.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Only do Granite multipliers if set

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Pull granite ffn portion into a static function and reuse in hybrid

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat(py): Allow gguf duplicate keys if they match by value and type

This is helpful for hybrid models that want to do gguf param setting by
calling multiple parent classes without needing to make those parent
classes try/except on every attempt to set a gguf value.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor(py): Simplify granitemoehybrid conversion to use parents better

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add GRANITE_MOE_HYBRID through llama-arch

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Support GRANITE_MOE_HYBRID in llama-model

This re-uses the Bamba code paths heavily and simply adds the missing parts
for loading MoE and the shared expert.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* style: Fix flake8 errors

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Fix recurrent cache get after rebase

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Fix hybrid granite implementation for signature changes in build_mamba*_layer

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Refactor relationship between non-hybrid classes and hybrid impl to use mixins

The challenge here is to give both the non-hybrid classes (llm_build_mamba
and llm_build_granite) AND the hybrid class (llm_build_hybrid_mamba) access
to the same intermediate "base class" functionality (build_mamba*_layer,
build_granite_attention_layer) without running into trouble with diamond
inheritance of llm_graph_context. Due to the non-trivial initialization
that happens in llm_graph_context, diamond inheritance results in multiple
initializations of the common base which cause problems around the unique
ptrs. I wanted to get away from `self->` everywhere, but this is still a
bit cleaner than making those methods static I think.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Implement the full copy-paste version to duplicate the layer builders

This follows the pattern where the type of input is pinned to the type of
memory and that is used to dispatch to the correct version of `build_rs` /
`build_attn`. There's a lot of code duplication that can hopefully be
pulled into common functions in the graph later.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Rename llm_build_hybrid_mamba -> llm_build_granite_hybrid

I've gone back and forth a lot about how/if to try to implement reuse of the
"child model" layer types for hybrid models. At the end of the day, I think
hybrid models are their own beast and even if their layers are inspired by
other models, they should maintain control of their own layer building (in
other words, the copy-paste method). Given that, the name should reflect
that this is not a generic hybrid model builder, but rather a granite-
specific hybrid model builder that can do MoE (granite 4) or dense (bamba).

As part of this, I also cleaned up dangling comments from previous attempts
at using static methods for reusability.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON

* memory : correctly handle failure in apply()

ggml-ci

* style: Remove TODO for adding first hybrid models to the switch

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Fix bad merge in tensor_mapping.py w/ SSM_NORM

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Fix bad merge resolution with variable renames/moves in llm_build_mamba

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* docs: Fix comment about duplicate key check

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Conform to standard way of initializing inp_out_ids

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* convert : fix jamba conv1d shape squeezing

* fix: Fix input initialization in granite_hybrid after removal of hybrid inputs

Branch: GraniteFourWithJamba

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Use llm_graph_context_mamba in llm_build_granite_hybrid

Branch: GraniteFourWithJamba

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Refactor mamba2/granite/jamba/granite_hybrid relationships as mixins

The key is for the mixin classes (llm_graph_context_mamba,
llm_graph_context_granite) to use virtual inheritance from
llm_graph_context. This allows the common members to exist only once in the
class hierarchy. The downside is that llm_graph_context will be
re-initialized once for each parent (ie 2x for single mixin, 3x for two
mixins, etc...).

Branch: GraniteFourWithJamba

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
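The virtual-inheritance layout described in that commit can be illustrated with a minimal diamond (class names here are placeholders, not the actual llama.cpp types): declaring the common base `virtual` in each mixin means the most-derived class holds exactly one shared base subobject, so its members exist only once and access to them is unambiguous.

```cpp
#include <cassert>

// Shared base standing in for llm_graph_context (illustrative).
struct context_base {
    int n_init = 0;
    context_base() { ++n_init; } // counts base-constructor runs
};

// Virtual inheritance: both mixins refer to one shared context_base.
struct mixin_mamba   : virtual context_base {};
struct mixin_granite : virtual context_base {};

// Most-derived class combines both mixins over a single base subobject.
struct hybrid_model : mixin_mamba, mixin_granite {};
```

Without the `virtual` keyword the diamond would give `hybrid_model` two independent `context_base` subobjects, making `m.n_init` ambiguous; the later commit in this series reverts to plain inheritance for simplicity instead.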

* graph : add back hybrid memory graph input

But this time it contains the sub-cache graph inputs.
This *should* make it easier to handle updating the inputs
when caching the graph (eventually).

* model : add Jamba to Mamba-specific hparams printing

* fix: Fix input setup after upstream merge

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* jamba : remove redundant nullptr initializations

* model : remove unnecessary prefix for tensor loading constants

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model : use ggml_swiglu_split for Mamba

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* feat: Add support for dense FFN in GraniteMoeHybrid

This was already partially supported via reusing the granite ffn builder,
and there may be models that leverage this architecture going forward. The
naming is a bit odd, but in the transformers version, it reuses the same
model class and simply has zero regular experts and a single shared expert
(which is the same as a single dense FFN).

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Add support for dense FFN tensor names on c++ side

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Use child inputs for Falcon H1 after merge resolution

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove unnecessary prefix on tensor constants

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model : make falcon-h1 use shared mamba2 layer builder

* memory : avoid referring to KV in recurrent cache logs

* fix: Revert order changes for Falcon H1 to stay consistent with upstream

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* gguf-py : avoid adding duplicate tensor mappings for Jamba

Some of the tensor names are common with Llama4

* refactor: Collapse Bamba and GraniteMoeHybrid into GraniteHybrid

The only key difference is the use of rope which is now set via
rope_finetuned in the hparams

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Remove use of diamond inheritance

Per PR discussion, it's simpler to keep this with basic inheritance and not
introduce the complexity of virtual inheritance and multiple inheritance

https://github.com/ggml-org/llama.cpp/pull/13550#issuecomment-3053787556

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Log mamba params for Granite Hybrid

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove unused ssm_in_b

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* refactor: Remove ATTENTION_LAYER_INDICES hparam in favor of n_head_kv

This matches how recurrent vs attention heads are identified for Jamba

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove unused template expansion for get_arr

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Review cleanup in convert_hf_to_gguf

The gist is to be explicit about which base class is being used with the
multiple inheritance setup

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Undo hidden warnings about duplicate identical keys in add_key_value

After further discussion, this encourages sloppy overwriting in the model
converters

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: If not using ROPE, context is "infinite"

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* doc: Add a comment outlining expected duplicate key warnings

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Remove unnecessary duplicate keys in converter

Co-authored-by: Francis Couture-Harpin <git@compilade.net>

(thanks for the sharp eyes and patience!)

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Francis Couture-Harpin <git@compilade.net>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* vocab : add midm-2.0 model pre-tokenizer (#14626)

* llama : move enum llama_vocab_pre_type to implementation (#14631)

ggml-ci

* readme : add hot PRs (#14636)

* readme : add hot PRs

* cont

* readme : update title

* readme : hot PRs links

* cont

* HIP : Add HIP 7.0+ compatibility for hipBLAS compute types (#14634)

* model : support LiquidAI LFM2 hybrid family (#14620)

**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert into gguf, install transformers from source
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```

* vulkan: optimizations for deepseek prompt processing (#14555)

* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader

* vulkan: increase coopmat2 mul_mat_id tile size

* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path

* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)

* vulkan: support SET_ROWS (#14587)

* vulkan: support SET_ROWS

Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.

* vulkan: optimize set_rows

Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.

* server : fix pooled embedding output (#14645)

* vulkan : implement ggml_roll (ggml/1290)

ggml-ci

* vulkan : implement bilinear interpolation (ggml/1291)

ggml-ci

* sync : ggml

ggml-ci

* vulkan : remove unused vars (#0)

ggml-ci

* sync : ggml

* CUDA: add set rows for f32 and f16 (#14551)

* CUDA: add set rows for f32 and f16

* Review: change kernel params, use strides from host

* Use 1-d kernel

* Review: use int64_t for blockDim.x, rename nb->s for clarity

* docs : add LFM2 to models section (#14650)

* readme : add LFM2 to models section

* fix copy paste...

* tests : cover lfm2 cases in test_ssm_conv (#14651)

* cmake : Add CMake presets for Linux and GCC (#14656)

* metal : Add missing unary ops Metal support (#14660)

* ggml : add build-time message to remind about ggml_set_rows (#14661)

ggml-ci

* cuda : add ELU support (#14657)

* cuda : add set rows for bf16 (#14664)

* quantize : fix minor logic flaw in --tensor-type (#14572)

* llama : add jinja template for rwkv-world (#14665)

* llama : add jinja template for rwkv-world

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* sycl: Batched mulmat rework for oneDNN dispatch (#14617)

* SY…
QVAC-4552: Sync port with upstream version b5932
* [common] Pure interface for files

Convert llama_file to a pure virtual class that can be overridden by multiple implementations (disk, single memory buffer, ...)
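The shape of this interface can be sketched in Python (class and method names here are illustrative, not the actual llama.cpp API):

```python
from abc import ABC, abstractmethod

class LlamaFile(ABC):
    """Illustrative stand-in for the pure virtual llama_file interface."""

    @abstractmethod
    def read(self, n: int) -> bytes: ...

    @abstractmethod
    def seek(self, offset: int) -> None: ...

class MemoryFile(LlamaFile):
    """One possible implementation: backed by a single in-memory buffer."""

    def __init__(self, data: bytes):
        self._data = data
        self._pos = 0

    def read(self, n: int) -> bytes:
        # Read up to n bytes from the current position
        chunk = self._data[self._pos:self._pos + n]
        self._pos += len(chunk)
        return chunk

    def seek(self, offset: int) -> None:
        self._pos = offset
```

A disk-backed implementation would expose the same `read`/`seek` surface, which is what lets the loader stay agnostic to where the bytes come from.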

* [common] Compile time debug logs

Define a new macro, LLAMA_LOG_CMAKE_DEBUG, that compiles to a no-op in release builds. This provides good tracing and debugging capabilities that will be especially useful for the async loading of multiple model shards.

* [aux] Test full load from disk

This change adds an additional automated test that loads from disk, to ensure the existing functionality does not break.

* [aux] GGUF split summary

The gguf-split utility now generates a `.txt` file listing all tensors. This is useful both for manual inspection/debugging and for incremental tensor loading, where it is not possible to know which tensors are present in other split files (information that is critical for handling optional tensors).

* [aux] gguf tensor must be followed

Add a flag to the tool to ensure that certain tensor names are always followed by another tensor and never fall at the end of a shard. This keeps the shard from being released while the tensor is processed, and avoids missing-file failures for duplicate tensors that are re-referenced a few tensors later (typically token_embd.weight / output).
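The constraint being enforced can be sketched as a small check (a hypothetical helper, not the actual gguf-split code):

```python
def check_shard_order(shard_tensors, must_be_followed):
    """Return names that illegally terminate a shard.

    shard_tensors: list of per-shard tensor name lists, in file order.
    must_be_followed: names that must not be the last tensor in a shard.
    (Illustrative sketch of the constraint only.)
    """
    violations = []
    for names in shard_tensors:
        # A flagged tensor at the end of a shard means the shard could be
        # released before a later duplicate reference resolves
        if names and names[-1] in must_be_followed:
            violations.append(names[-1])
    return violations
```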

* [aux] verbose gguf split

Show which shards each tensor belongs to

* [common] Stream buffer for uint8 data

- Ensures a char_traits implementation for uint8 exists that can be used with std::basic_streambuf.
- Adds an implementation of std::basic_streambuf over a single vector. This will be used by llama.cpp and the tests when loading from a single memory buffer.

* [mbuffer] Llama file buffer implementation

Override the pure virtual interface with a class that can operate on a single memory buffer.

* [refactor] C splits into C++

Auxiliary function to convert a list of C strings to a vector of C++ strings.

* [common] GGUF reader from memory

Add new GGUF reader implementation that can read metadata from a memory buffer.
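A minimal sketch of the idea, reading just the GGUF magic and version from an in-memory buffer (the real reader parses the full metadata; this only shows the buffer-based entry point):

```python
import struct

def read_gguf_header(buf: bytes) -> int:
    """Read the GGUF magic and version from an in-memory buffer.

    GGUF files begin with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 version field.
    """
    if buf[:4] != b"GGUF":
        raise ValueError("not a GGUF buffer")
    (version,) = struct.unpack_from("<I", buf, 4)
    return version
```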

* [refactor][mbuffer] File load from variant

- Add code to be able to load a gguf file from a variant (memory or disk).
- Some structs simplify how to load a file and keep track of the pointers (which are now in the same struct).

* [refactor] Process file method

Move the loader code that processes a file after it has been loaded into memory and populates the loader's own attributes into a reusable method.

* [mbuffer] Expose single-buffer loading to Llama interface

Add new C++ function to Llama main header to load from a single memory buffer, and propagate changes to internal calls/constructors.

* [fbuffers] Future file buffer implementation

A file buffer that can be fulfilled using string keys. The extract method waits until the file is provided.
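The wait-until-provided behavior can be sketched with a condition variable (illustrative Python; the actual class is C++):

```python
import threading

class FutureFileBuffer:
    """Sketch: extract() blocks until provide() supplies the matching key."""

    def __init__(self):
        self._files = {}
        self._cond = threading.Condition()

    def provide(self, key, data):
        # Fulfil a pending (or future) request for this key
        with self._cond:
            self._files[key] = data
            self._cond.notify_all()

    def extract(self, key, timeout=None):
        # Block until the file for this key has been provided
        with self._cond:
            self._cond.wait_for(lambda: key in self._files, timeout=timeout)
            return self._files.pop(key)
```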

* [fbuffers] Incremental loading of future files

Handles the logic for incrementally loading files and tensors in model shards.

* [refactor] Create backend buffers

Refactor backend buffer creation (for model loading) into functions.

* [refactor] Load all data

- The function now takes size_data instead of the member attribute.
- Sanity checks of file pointer handles

These two changes will be useful when calling `load_all_data` multiple times during incremental shard load.

* [fbuffers] Incremental model load

Adapt the loader and model load to incrementally load files and upload tensors.

* [fbuffers] Expose async interface

Add functions to Llama.cpp public headers to asynchronously load shards.

* [refactor] Increase common loading granularity

Split out some common loading functionality. This will help with the memory loading tests.

* [aux] Common test

Add a submodule with re-usable code for tests.

* [aux] Memory example (embedding)

Adapt embedding example to showcase how to load from memory. Can be configured through environment variables.

* [aux] Memory example (simple)

Adapt simple example to showcase how to load from memory. Can be configured with environment variables.

Qwen3, for example, can be used with the simple example.

* [aux] Auto. memory loading tests

Add some automatic tests that load from memory (single buffer or multiple async splits)
Add approval-check-worker workflow
diff --git a/scripts/tune/tune.py b/scripts/tune/tune.py
new file mode 100644
index 000000000..eff17d3
--- /dev/null
+++ b/scripts/tune/tune.py
@@ -0,0 +1,253 @@
+#!/usr/bin/env python3
+"""
+Optimize runtime parameters for llama-simple binary using eval time measurements.
+Usage: python tune.py --model /path/to/model.gguf
+"""
+import os
+import time
+import argparse
+from functools import partial
+
+import numpy as np
+# pip install scikit-optimize
+from skopt import gp_minimize, expected_minimum
+from skopt.plots import plot_objective, plot_convergence
+from skopt.space import Categorical
+import matplotlib.pyplot as plt
+import json
+
+BAD_CONFIGURATIONS = []
+
+# Progress tracking global variables
+progress_start_time = None
+progress_current_call = 0
+progress_total_calls = 0
+progress_best_score = float('inf')
+
+def display_progress():
+    """Display current optimization progress with time estimates."""
+    global progress_start_time, progress_current_call, progress_total_calls, progress_best_score
+
+    if progress_start_time is None:
+        return
+
+    elapsed_time = time.time() - progress_start_time
+    if progress_current_call > 0:
+        avg_time_per_call = elapsed_time / progress_current_call
+        remaining_calls = progress_total_calls - progress_current_call
+        estimated_remaining_time = avg_time_per_call * remaining_calls
+
+        progress_percent = (progress_current_call / progress_total_calls) * 100
+
+        print(f"\n{'='*60}")
+        print(f"OPTIMIZATION PROGRESS")
+        print(f"{'='*60}")
+        print(f"Iteration: {progress_current_call}/{progress_total_calls} ({progress_percent:.1f}%)")
+        print(f"Elapsed time: {elapsed_time:.1f}s")
+        print(f"Est. remaining time: {estimated_remaining_time:.1f}s")
+        print(f"Best metric so far: {progress_best_score:.4f}")
+        print(f"{'='*60}\n")
+
+def run_iterations(get_opts_fn, run_binary_fn, run_options, model_path, binary_path="./build/bin/llama-cli", iterations=1):
+    """Run llama-siple with specified options and return eval time."""
+    try:
+        run_options_str = get_opts_fn(run_options, model_path, binary_path)
+        print(run_options_str)
+
+        results = []
+
+        # Run the test (can increase iterations for more stable results)
+        for _ in range(iterations):
+            results.append(run_binary_fn(run_options_str))
+
+        # Return eval time as the objective (we want to minimize this)
+        return np.mean(results)
+
+    except Exception as e:
+        BAD_CONFIGURATIONS.append(run_options)
+        print("ERROR:", e, run_options)
+        print("BAD_CONFIGURATIONS:", BAD_CONFIGURATIONS)
+        return 1000  # High penalty for failed runs
+
+
+def optimize_runtime_with_progress(x, get_opts_fn, run_binary_fn, run_options_list, model_path, llama_simple_path):
+    """Objective function for optimization with progress tracking."""
+    global progress_current_call, progress_best_score
+
+    progress_current_call += 1
+
+    run_options = {
+        run_options_list[i][0]: run_options_list[i][1][run_options_list[i][1].index(x[i])]
+        for i in range(len(run_options_list))
+    }
+
+    result = run_iterations(get_opts_fn, run_binary_fn, run_options, model_path, llama_simple_path)
+
+    # Update best score
+    if result < progress_best_score:
+        progress_best_score = result
+
+    # Display progress every call
+    display_progress()
+
+    return result
+
+
+def load_cache(cache_filename):
+    """Load cached optimization results."""
+    try:
+        with open(cache_filename, "r") as cache_file:
+            cache_data = json.load(cache_file)
+            return cache_data["x0"], cache_data["y0"]
+    except (OSError, json.JSONDecodeError, KeyError):
+        pass
+    return None, None
+
+
+def save_cache(cache_filename, x0, y0):
+    """Save optimization results to cache."""
+    # Convert numpy int64 objects to Python int objects
+    x0 = [[int(item) if isinstance(item, np.int64) else item for item in sublist] for sublist in x0]
+    y0 = [int(item) if isinstance(item, np.int64) else item for item in y0]
+
+    cache_data = {"x0": x0, "y0": y0}
+    with open(cache_filename, "w") as cache_file:
+        json.dump(cache_data, cache_file)
+
+
+def plot_iterations(result):
+    """Plot optimization iterations."""
+    search_space = result.space
+    x_iters = result.x_iters
+    func_vals = result.func_vals
+    search_space_names = [dim.name for dim in search_space]
+    opts = search_space_names + ["objective_r"]
+
+    num_params = len(opts) + 1
+    fig, axs = plt.subplots(num_params, figsize=(8, num_params * 8), sharex=True)
+    iterations = list(range(1, len(x_iters) + 1))
+
+    for i, param in enumerate(opts):
+        if param == "objective_r":
+            param_values = func_vals
+        else:
+            param_index = search_space_names.index(param)
+            param_values = [x[param_index] for x in x_iters]
+
+        axs[i].scatter(iterations, param_values)
+        axs[i].set_xlabel("Iteration")
+        axs[i].set_ylabel(param)
+
+    plot_convergence(result, true_minimum=0, ax=axs[-1])
+    return axs
+
+def parse_args(default_bin):
+    parser = argparse.ArgumentParser(description='Optimize llama-simple runtime parameters')
+    parser.add_argument('--model', '-m', required=True, help='Path to the GGUF model file')
+    parser.add_argument('--ngl', type=int, required=True, help='Max number of GPU layers')
+    parser.add_argument('--llama-binary', default=default_bin,
+                       help=f'Path to the llama binary (default: {default_bin})')
+    parser.add_argument('--n-calls', type=int, default=50,
+                       help='Number of optimization calls (default: 50)')
+    parser.add_argument('--cache', default='cache_simple.json',
+                       help='Cache file name (default: cache_simple.json)')
+    parser.add_argument('--single-execution', type=str,
+                       help='Run single execution with specified options (format: "--param1=value1 --param2=value2")')
+
+    args = parser.parse_args()
+    return args
+
+def main(args, get_opts_fn, run_binary_fn, run_options_list):
+
+    # Check if llama-simple binary exists
+    if not os.path.exists(args.llama_binary):
+        print(f"Error: llama-simple binary not found at {args.llama_binary}")
+        print("Please build llama.cpp first or specify correct path with --llama-binary")
+        return
+
+    # Check if model exists
+    if not os.path.exists(args.model):
+        print(f"Error: Model file not found at {args.model}")
+        return
+
+    # Handle single execution mode
+    if args.single_execution:
+        try:
+            print("Single execution")
+            run_options = args.single_execution
+            run_iterations(get_opts_fn, run_binary_fn, run_options, args.model, args.llama_binary)
+            return
+        except ValueError as e:
+            print(f"Error parsing single execution options: {e}")
+            return
+
+    # Initialize progress tracking
+    global progress_start_time, progress_total_calls
+    progress_start_time = time.time()
+    progress_total_calls = args.n_calls
+
+    # Create optimization dimensions
+    dimensions = [Categorical(opt[1]) for opt in run_options_list]
+    for i, opt in enumerate(run_options_list):
+        dimensions[i].name = opt[0]
+
+    # Load cache
+    x0, y0 = load_cache(args.cache)
+
+    # Create objective function
+    objective_function = partial(optimize_runtime_with_progress,
+                               get_opts_fn=get_opts_fn,
+                               run_binary_fn=run_binary_fn,
+                               run_options_list=run_options_list,
+                               model_path=args.model,
+                               llama_simple_path=args.llama_binary)
+
+    print(f"Starting optimization with {args.n_calls} calls and {args.ngl} gpu layers...")
+    print(f"Using model: {args.model}")
+    print(f"Cache file: {args.cache}")
+
+    # Run optimization
+    result = gp_minimize(objective_function, dimensions,
+                        n_calls=args.n_calls,
+                        n_initial_points=min(10, args.n_calls),
+                        random_state=42,
+                        x0=x0, y0=y0,
+                        initial_point_generator="lhs")
+
+    # Save results
+    save_cache(args.cache, result.x_iters, result.func_vals)
+
+    # Print results
+    print(f"\nBest options found: {result.x}")
+    print(f"Minimum eval time: {result.fun:.4f} seconds")
+
+    # Convert result.x back to human-readable format - FIX: Find index of value in options list
+    best_options = {}
+    for i, (name, options) in enumerate(run_options_list):
+        # Find the value in result.x[i] and locate its index in the options list
+        value = result.x[i]
+        if value in options:
+            best_options[name] = value
+        else:
+            # Fallback: use the first option if value not found
+            print(f"Warning: Value '{value}' not found in options for {name}, using first option")
+            best_options[name] = options[0]
+
+    print("\nBest configuration:")
+    for name, value in best_options.items():
+        print(f"  {name}: {value}")
+
+    min_x, _ = expected_minimum(result)
+    print(f"Expected minimum: {min_x}")
+
+    if BAD_CONFIGURATIONS:
+        print(f"\nBAD_CONFIGURATIONS: {len(BAD_CONFIGURATIONS)}")
+
+    # Plot results
+    try:
+        plot_iterations(result)
+        plot_objective(result)
+        # Might need PyQt6
+        plt.show()
+    except Exception as e:
+        print(f"Plotting failed: {e}")
diff --git a/scripts/tune/tune_quality.py b/scripts/tune/tune_quality.py
new file mode 100644
index 000000000..ffae255
--- /dev/null
+++ b/scripts/tune/tune_quality.py
@@ -0,0 +1,330 @@
+#!/usr/bin/env python3
+"""
+BERTScore-based translation quality optimization for llama.cpp models.
+Uses BERTScore to evaluate translation quality instead of HellaSwag accuracy.
+"""
+import subprocess
+import sys
+import os
+import re
+import json
+import hashlib
+import numpy as np
+from typing import Dict, List, Tuple, Any, Optional
+from collections import Counter
+
+# Import bert_score for translation quality evaluation
+import bert_score
+
+# Import language_tool_python for grammar checking
+import language_tool_python
+
+script_dir = os.path.dirname(os.path.abspath(__file__))
+sys.path.insert(0, script_dir)
+from tune import parse_args, main
+
+# Configuration
+BERTSCORE_MODEL = 'microsoft/deberta-v3-base'
+
+# Translation benchmarks for quality evaluation
+# Tiny subset of https://openslr.org/100
+TRANSLATION_BENCHMARKS = [
+    {
+        "prompt": "Translate the following English text to French:\n\nEnglish: As you can see, it does not look like a slam lesson, it is a language lesson, a language which allows to give orders to machines and computers the language of the 21st century: the computer code.\nFrench:",
+        "ground_truth": "Comme vous pouvez le constater, il ne s'agit pas d'un cours de slam, il s'agit d'un cours de langue, une langue qui permet de donner des ordres à des machines et à des ordinateurs, la langue du 21e siècle : le code informatique.",
+        "tool": "fr-FR"
+    },
+    {
+        "prompt": "Translate the following English text to Spanish:\n\nEnglish: Some years ago, when I was diving in the Lombok Strait, in Indonesia, 98 feet below the water, with that feeling of weightlessness, surrounded by a great biodiversity of reefs, corals, sea turtles, ocean sunfishes and fishes of all colors, I had an intense feeling of connection with nature.\nSpanish:",
+        "ground_truth": "Hace unos años, cuando me encontraba buceando en el estrecho de Lombok, en Indonesia, a 30 metros debajo del agua, con esa sensación de ingravidez, rodeado de una gran biodiversidad, de arrecifes, de corales, de tortugas, de peces mola mola y de peces de todos los colores, tuve una intensa sensación de estar conectado con la naturaleza.",
+        "tool": "es-ES"
+    },
+    {
+        "prompt": "Translate the following English text to Portuguese:\n\nEnglish: Have you ever stopped to think about clothes for disabled people?\nPortuguese:",
+        "ground_truth": "Vocês já pararam pra pensar como é o vestuário das pessoas com deficiência?",
+        "tool": "pt-PT"
+    }
+]
+
+def get_metrics(metrics_filepath: str, ground_truth: str, prediction: str, tool: str) -> Dict[str, Any]:
+    """
+    Calculate BERTScore and other quality metrics for translation evaluation.
+    Caches results to avoid recomputation.
+    """
+    print(f"Calculating metrics: {metrics_filepath}")
+
+    metrics = {
+        'bertscore_model': None,
+        'bertscore_P': None,
+        'bertscore_R': None,
+        'bertscore_F1': None,
+        'grammar_errors': None,
+        'repetition_score': None,
+        'objective_r': None
+    }
+
+    # Load cached scores
+    try:
+        with open(metrics_filepath, 'r', encoding='utf-8') as f:
+            metrics.update(json.load(f))
+    except FileNotFoundError:
+        pass
+
+    # Calculate BERTScore if not cached or model changed
+    if (not metrics["bertscore_P"] or not metrics["bertscore_R"] or
+        not metrics["bertscore_F1"] or metrics["bertscore_model"] != BERTSCORE_MODEL):
+        try:
+            metrics["bertscore_model"] = BERTSCORE_MODEL
+            score = bert_score.score([prediction], [ground_truth], model_type=BERTSCORE_MODEL)
+            metrics["bertscore_P"], metrics["bertscore_R"], metrics["bertscore_F1"] = (
+                score[0].item(), score[1].item(), score[2].item()
+            )
+        except Exception as e:
+            print(f"Warning: BERTScore calculation failed: {e}")
+            metrics["bertscore_P"] = metrics["bertscore_R"] = metrics["bertscore_F1"] = 0.0
+
+    # Calculate grammar errors if not cached
+    if metrics["grammar_errors"] is None:
+        metrics["grammar_errors"] = 0.0
+
+    language_tool = language_tool_python.LanguageTool(tool)
+    try:
+        matches = language_tool.check(prediction)
+        metrics["grammar_errors"] = len(matches) / max(len(prediction.split()), 1)
+    except Exception as e:
+        print(f"Warning: Grammar checking failed: {e}")
+        metrics["grammar_errors"] = 0.0
+
+    # Calculate repetition score if not cached
+    if metrics["repetition_score"] is None:
+        try:
+            words = prediction.split()
+            if len(words) > 0:
+                word_counts = Counter(words)
+                repeated_words = sum(count - 1 for count in word_counts.values() if count > 1)
+                metrics["repetition_score"] = repeated_words / len(words)
+            else:
+                metrics["repetition_score"] = 0.0
+        except Exception as e:
+            print(f"Warning: Repetition calculation failed: {e}")
+            metrics["repetition_score"] = 0.0
+
+    # Calculate objective score (we want to minimize this)
+    # Higher BERTScore Recall = better translation quality = lower objective value
+    # Add penalties for grammar errors and repetitions
+    if metrics["bertscore_R"] is not None:
+        grammar_penalty = metrics["grammar_errors"] * 0.1  # Small penalty for grammar errors
+        repetition_penalty = metrics["repetition_score"] * 0.05  # Small penalty for repetitions
+        metrics["objective_r"] = -(metrics["bertscore_R"] - grammar_penalty - repetition_penalty)
+    else:
+        metrics["objective_r"] = 1.0  # Bad score if BERTScore failed
+
+    # Save metrics to cache
+    try:
+        with open(metrics_filepath, 'w', encoding='utf-8') as f:
+            json.dump(metrics, f, indent=2, ensure_ascii=False)
+    except Exception as e:
+        print(f"Warning: Failed to save metrics: {e}")
+
+    return metrics
+
+def run_binary(run_options_str):
+    """Run the binary and evaluate translation quality using BERTScore."""
+    try:
+        # Parse the command to extract parameters
+        parts = run_options_str.split()
+        model_path = None
+        binary_path = None
+
+        # Find model path and binary path
+        for i, part in enumerate(parts):
+            if part == "-m" and i + 1 < len(parts):
+                model_path = parts[i + 1]
+            elif part.endswith("llama-cli") or part.endswith("main"):
+                binary_path = part
+
+        if not model_path or not binary_path:
+            print("Error: Could not parse model path or binary path from command")
+            return 100.0
+
+        # Create output directory for this run
+        run_hash = hashlib.md5(run_options_str.encode()).hexdigest()[:8]
+        output_dir = f"translation_eval_{run_hash}"
+        os.makedirs(output_dir, exist_ok=True)
+
+        all_scores = []
+
+        # Run translation benchmarks
+        for i, benchmark in enumerate(TRANSLATION_BENCHMARKS):
+            print(f"Running benchmark {i+1}/{len(TRANSLATION_BENCHMARKS)}")
+
+            # Build command for this benchmark - use the base command and add benchmark-specific params
+            benchmark_cmd = run_options_str.split()
+
+            # Add benchmark-specific parameters
+            benchmark_cmd.extend(["--prompt", benchmark["prompt"]])
+
+            # Run the command
+            try:
+                process = subprocess.run(benchmark_cmd,
+                                       stdout=subprocess.PIPE,
+                                       stderr=subprocess.PIPE,
+                                       timeout=120,  # 2 minute timeout per benchmark
+                                       check=False)
+
+                if process.returncode != 0:
+                    print(f"Warning: Benchmark {i+1} failed with return code {process.returncode}")
+                    print(f"STDERR: {process.stderr.decode()}")
+                    all_scores.append(1.0)  # Bad score for failed runs
+                    continue
+
+                # Extract prediction from output
+                output = process.stdout.decode()
+                prediction = output.strip()
+
+                # Remove the prompt from prediction if it's included
+                if benchmark["prompt"] in prediction:
+                    prediction = prediction.split(benchmark["prompt"])[-1].strip()
+
+                # Calculate metrics
+                metrics_filepath = os.path.join(output_dir, f"benchmark_{i}_metrics.json")
+                metrics = get_metrics(metrics_filepath,
+                                    benchmark["ground_truth"], prediction, benchmark["tool"])
+
+                objective_score = metrics.get("objective_r", 1.0)
+                all_scores.append(objective_score)
+
+                print(f"Benchmark {i+1} - BERTScore R: {metrics.get('bertscore_R', 0):.4f}, "
+                      f"Objective: {objective_score:.4f}")
+
+            except subprocess.TimeoutExpired:
+                print(f"Warning: Benchmark {i+1} timed out")
+                all_scores.append(1.0)  # Bad score for timeouts
+            except Exception as e:
+                print(f"Error running benchmark {i+1}: {e}")
+                all_scores.append(1.0)  # Bad score for errors
+
+        # Calculate average score across all benchmarks
+        if all_scores:
+            avg_score = np.mean(all_scores)
+            print(f"Average translation quality objective score: {avg_score:.4f}")
+            return avg_score
+        else:
+            print("Warning: No successful benchmarks")
+            return 100.0  # Bad score if no benchmarks succeeded
+
+    except Exception as e:
+        print(f"Error in run_binary: {e}")
+        return 100.0  # Bad score for any other errors
+
+if __name__ == "__main__":
+    args = parse_args(default_bin='./build/bin/llama-cli')
+
+    # Define quality-focused sampling parameters for optimization
+    run_options_list = [
+        # Core Sampling Parameters (Most Critical for Quality)
+
+        # 1. Temperature - Controls randomness vs determinism
+        ("--temp", [
+            "--temp 0.1",   # Very focused, deterministic
+            "--temp 0.3",   # Focused, good for factual tasks
+            "--temp 0.5",   # Moderate creativity
+            "--temp 0.7",   # Balanced (recommended default)
+            "--temp 0.8",   # Good balance
+            "--temp 0.9",   # More creative
+            "--temp 1.0",   # Creative but coherent
+            "--temp 1.2"    # More creative, potentially less coherent
+        ]),
+
+        # 2. Top-p (Nucleus Sampling) - Controls diversity while maintaining quality
+        ("--top-p", [
+            "--top-p 0.5",   # Very focused
+            "--top-p 0.7",   # Focused, higher quality
+            "--top-p 0.8",   # Good balance
+            "--top-p 0.85",  # Balanced
+            "--top-p 0.9",   # Good balance (recommended)
+            "--top-p 0.95",  # Standard default
+            "--top-p 0.98",  # More diverse
+            "--top-p 1.0"    # No nucleus filtering
+        ]),
+
+        # 3. Top-k - Limits token selection to most probable candidates
+        ("--top-k", [
+            "--top-k 10",   # Very focused
+            "--top-k 20",   # More focused, higher quality
+            "--top-k 30",   # Balanced
+            "--top-k 40",   # Good balance (default)
+            "--top-k 50",   # Balanced, more diverse
+            "--top-k 60",   # More diverse
+            "--top-k 80",   # Very diverse
+            "--top-k 100"   # Most diverse
+        ]),
+
+        # 4. Min-p - Filters out low-probability tokens
+        ("--min-p", [
+            "--min-p 0.01",  # Very permissive
+            "--min-p 0.02",  # Permissive
+            "--min-p 0.05",  # Good default
+            "--min-p 0.08",  # More restrictive
+            "--min-p 0.1",   # Restrictive, higher quality
+            "--min-p 0.15",  # Very restrictive
+            "--min-p 0.2"    # Extremely restrictive
+        ]),
+
+        # Repetition Control (Critical for Coherence)
+
+        # 5. Repeat Penalty - Prevents repetitive text
+        ("--repeat-penalty", [
+            "--repeat-penalty 1.0",   # Disabled
+            "--repeat-penalty 1.02",  # Very light penalty
+            "--repeat-penalty 1.05",  # Light penalty (recommended)
+            "--repeat-penalty 1.1",   # Moderate penalty (recommended)
+            "--repeat-penalty 1.15",  # Moderate-strong penalty
+            "--repeat-penalty 1.2",   # Strong penalty
+            "--repeat-penalty 1.25",  # Very strong penalty
+            "--repeat-penalty 1.3"    # Extreme penalty
+        ]),
+
+        # 6. Repeat Last N - How far back to look for repetitions
+        ("--repeat-last-n", [
+            "--repeat-last-n 16",   # Short context
+            "--repeat-last-n 32",   # Short-medium context
+            "--repeat-last-n 64",   # Balanced default
+            "--repeat-last-n 96",   # Medium-large context
+            "--repeat-last-n 128",  # Large context
+            "--repeat-last-n 192",  # Very large context
+            "--repeat-last-n 256"   # Maximum context
+        ]),
+
+        # Advanced Quality Parameters
+
+        # 7. Typical-p - Promotes contextually coherent tokens
+        ("--typical", [
+            "--typical 1.0",   # Disabled
+            "--typical 0.95",  # Light filtering
+            "--typical 0.9",   # Recommended for quality
+            "--typical 0.85",  # Moderate filtering
+            "--typical 0.8",   # Strong filtering
+            "--typical 0.75",  # Very strong filtering
+            "--typical 0.7"    # Extreme filtering
+        ]),
+
+        # 8. Mirostat - Adaptive sampling for consistent quality
+        ("--mirostat", [
+            "--mirostat 0",  # Disabled (default)
+            "--mirostat 1",  # Mirostat v1
+            "--mirostat 2"   # Mirostat v2 (often better quality)
+        ]),
+
+        # Keep seed constant for reproducible results
+        ("--seed", ["-s 42"]),
+    ]
+
+    def run_str(run_options, model_path, binary_path):
+        """Build command string for llama-cli with translation evaluation."""
+        if isinstance(run_options, dict):
+            run_options = " ".join(run_options.values())
+        # Use the main binary for translation evaluation
+        return f"{binary_path} -m {model_path} --threads 8 -ngl {args.ngl} {run_options}"
+
+    main(args, run_str, run_binary, run_options_list)
diff --git a/scripts/tune/tune_quality_swag.py b/scripts/tune/tune_quality_swag.py
new file mode 100644
index 000000000..1eaedad
--- /dev/null
+++ b/scripts/tune/tune_quality_swag.py
@@ -0,0 +1,172 @@
+import subprocess
+import sys
+import os
+import re
+
+script_dir = os.path.dirname(os.path.abspath(__file__))
+sys.path.insert(0, script_dir)
+from tune import parse_args, main
+
+def run_binary(run_options_str):
+    """Run the binary and parse HellaSwag accuracy score."""
+    try:
+        process = subprocess.run(run_options_str,
+                                 stdout=subprocess.PIPE,
+                                 stderr=subprocess.PIPE,
+                                 shell=True,
+                                 check=False,  # Don't raise exception on non-zero exit
+                                 timeout=300   # 5 minute timeout
+                                 )
+
+        if process.returncode != 0:
+            print(f"Warning: Process returned non-zero exit code: {process.returncode}")
+            print(f"STDERR: {process.stderr.decode()}")
+            return 100.0  # Return bad score for failed runs
+
+        # Parse HellaSwag accuracy from stdout
+        stdout_text = process.stdout.decode()
+        stderr_text = process.stderr.decode()
+
+        # Look for HellaSwag accuracy patterns in output
+        # Pattern for format: "20      75.00000000%    [53.1299%, 88.8138%]"
+        accuracy_patterns = [
+            r"20\s+([\d.]+)%\s+\[",
+        ]
+
+        accuracy = None
+        for pattern in accuracy_patterns:
+            match = re.search(pattern, stdout_text, re.IGNORECASE)
+            if match:
+                accuracy = float(match.group(1))
+                # Convert percentage to decimal if needed (values > 1.0 are likely percentages)
+                if accuracy > 1.0:
+                    accuracy = accuracy / 100.0
+                break
+
+        if accuracy is None:
+            print("Warning: Could not parse HellaSwag accuracy from output")
+            print("STDOUT:", stdout_text[:500])  # Show first 500 chars
+            print("STDERR:", stderr_text[:500])
+            return 100.0  # Return bad score for unparseable results
+        else:
+            print(f"HellaSwag accuracy: {accuracy:.4f}")
+
+        # Return negative accuracy since we want to MINIMIZE the objective function
+        # (higher accuracy = lower objective value = better)
+        return -accuracy
+
+    except subprocess.TimeoutExpired:
+        print("Warning: Process timed out")
+        return 100.0  # Return bad score for timeouts
+    except Exception as e:
+        print(f"Error running command: {e}")
+        return 100.0  # Return bad score for other errors
+
+if __name__ == "__main__":
+    args = parse_args(default_bin='./build/bin/llama-perplexity')
+
+    # Define quality-focused sampling parameters for optimization
+    run_options_list = [
+        # Core Sampling Parameters (Most Critical for Quality)
+
+        # 1. Temperature - Controls randomness vs determinism
+        ("--temp", [
+            "--temp 0.1",   # Very focused, deterministic
+            "--temp 0.3",   # Focused, good for factual tasks
+            "--temp 0.5",   # Moderate creativity
+            "--temp 0.7",   # Balanced (recommended default)
+            "--temp 0.8",   # Good balance
+            "--temp 0.9",   # More creative
+            "--temp 1.0",   # Creative but coherent
+            "--temp 1.2"    # More creative, potentially less coherent
+        ]),
+
+        # 2. Top-p (Nucleus Sampling) - Controls diversity while maintaining quality
+        ("--top-p", [
+            "--top-p 0.5",   # Very focused
+            "--top-p 0.7",   # Focused, higher quality
+            "--top-p 0.8",   # Good balance
+            "--top-p 0.85",  # Balanced
+            "--top-p 0.9",   # Good balance (recommended)
+            "--top-p 0.95",  # Standard default
+            "--top-p 0.98",  # More diverse
+            "--top-p 1.0"    # No nucleus filtering
+        ]),
+
+        # 3. Top-k - Limits token selection to most probable candidates
+        ("--top-k", [
+            "--top-k 10",   # Very focused
+            "--top-k 20",   # More focused, higher quality
+            "--top-k 30",   # Balanced
+            "--top-k 40",   # Good balance (default)
+            "--top-k 50",   # Balanced, more diverse
+            "--top-k 60",   # More diverse
+            "--top-k 80",   # Very diverse
+            "--top-k 100"   # Most diverse
+        ]),
+
+        # 4. Min-p - Filters out low-probability tokens
+        ("--min-p", [
+            "--min-p 0.01",  # Very permissive
+            "--min-p 0.02",  # Permissive
+            "--min-p 0.05",  # Good default
+            "--min-p 0.08",  # More restrictive
+            "--min-p 0.1",   # Restrictive, higher quality
+            "--min-p 0.15",  # Very restrictive
+            "--min-p 0.2"    # Extremely restrictive
+        ]),
+
+        # Repetition Control (Critical for Coherence)
+
+        # 5. Repeat Penalty - Prevents repetitive text
+        ("--repeat-penalty", [
+            "--repeat-penalty 1.0",   # Disabled
+            "--repeat-penalty 1.02",  # Very light penalty
+            "--repeat-penalty 1.05",  # Light penalty (recommended)
+            "--repeat-penalty 1.1",   # Moderate penalty (recommended)
+            "--repeat-penalty 1.15",  # Moderate-strong penalty
+            "--repeat-penalty 1.2",   # Strong penalty
+            "--repeat-penalty 1.25",  # Very strong penalty
+            "--repeat-penalty 1.3"    # Extreme penalty
+        ]),
+
+        # 6. Repeat Last N - How far back to look for repetitions
+        ("--repeat-last-n", [
+            "--repeat-last-n 16",   # Short context
+            "--repeat-last-n 32",   # Short-medium context
+            "--repeat-last-n 64",   # Balanced default
+            "--repeat-last-n 96",   # Medium-large context
+            "--repeat-last-n 128",  # Large context
+            "--repeat-last-n 192",  # Very large context
+            "--repeat-last-n 256"   # Maximum context
+        ]),
+
+        # Advanced Quality Parameters
+
+        # 7. Typical-p - Promotes contextually coherent tokens
+        ("--typical", [
+            "--typical 1.0",   # Disabled
+            "--typical 0.95",  # Light filtering
+            "--typical 0.9",   # Recommended for quality
+            "--typical 0.85",  # Moderate filtering
+            "--typical 0.8",   # Strong filtering
+            "--typical 0.75",  # Very strong filtering
+            "--typical 0.7"    # Extreme filtering
+        ]),
+
+        # 8. Mirostat - Adaptive sampling for consistent quality
+        ("--mirostat", [
+            "--mirostat 0",  # Disabled (default)
+            "--mirostat 1",  # Mirostat v1
+            "--mirostat 2"   # Mirostat v2 (often better quality)
+        ]),
+
+        # Keep seed constant for reproducible results
+        ("--seed", ["-s 42"]),
+    ]
+    def run_str(run_options, model_path, binary_path):
+        """Build command string for llama-perplexity with hellaswag evaluation."""
+        run_opts = " ".join(run_options.values())
+        # Use the perplexity command with hellaswag evaluation as specified
+        return f"{binary_path} -m {model_path} -f hellaswag_val_full.txt --hellaswag-tasks 20 --hellaswag -ngl {args.ngl} {run_opts}"
+    main(args, run_str, run_binary, run_options_list)
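Both tuning scripts above share one objective convention: the optimizer minimizes, so a metric we want to maximize (like HellaSwag accuracy) is negated, and failed or unparseable runs return a large positive penalty. A minimal sketch of that convention (the function name is illustrative, not part of the scripts):

```python
def objective_from_accuracy(accuracy):
    # The tuner minimizes the objective, so higher accuracy must map to a
    # lower value; failures get a large positive penalty (100.0), matching
    # the scripts above.
    if accuracy is None:
        return 100.0  # failed / unparseable run
    return -accuracy
```

With this shape, any off-the-shelf minimizer (the scripts pull in scikit-optimize) ranks a 90%-accuracy run ahead of a 50% one without special-casing.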
diff --git a/scripts/tune/tune_requirements.txt b/scripts/tune/tune_requirements.txt
new file mode 100644
index 000000000..50cb56b
--- /dev/null
+++ b/scripts/tune/tune_requirements.txt
@@ -0,0 +1,3 @@
+language_tool_python
+bert_score
+scikit-optimize
diff --git a/scripts/tune/tune_tps.py b/scripts/tune/tune_tps.py
new file mode 100644
index 000000000..8584713
--- /dev/null
+++ b/scripts/tune/tune_tps.py
@@ -0,0 +1,80 @@
+import subprocess
+import sys
+import os
+import re
+
+script_dir = os.path.dirname(os.path.abspath(__file__))
+sys.path.insert(0, script_dir)
+from tune import parse_args, main
+
+def run_str(run_options, model_path, binary_path):
+        run_opts = " ".join(run_options.values())
+        return f"{binary_path} -m {model_path} -p 'Hello, how are you?' -n 1 {run_opts}"
+
+def run_binary(run_options_str):
+        process = subprocess.run(run_options_str,
+                                 stdout=subprocess.PIPE,
+                                 stderr=subprocess.PIPE,
+                                 shell=True,
+                                 check=False,  # handle non-zero exits ourselves
+                                 )
+        if process.returncode != 0:
+            raise Exception(f"Error running: '{run_options_str}':\n{process.stderr.decode()}")
+
+        # Parse timing information from stderr
+        stderr_text = process.stderr.decode()
+
+        # Timing patterns for llama-cli output; the lookbehind keeps "eval time" from matching inside "prompt eval time"
+        prompt_eval_time_pattern = r"prompt eval time\s*=\s*([\d.]+)\s*ms"
+        eval_time_pattern = r"(?<!prompt )eval time\s*=\s*([\d.]+)\s*ms"
+
+        prompt_match = re.search(prompt_eval_time_pattern, stderr_text)
+        eval_match = re.search(eval_time_pattern, stderr_text)
+
+        if prompt_match and eval_match:
+            prompt_eval_time = float(prompt_match.group(1)) / 1000  # Convert to seconds
+            eval_time = float(eval_match.group(1)) / 1000  # Convert to seconds
+        else:
+            # Fallback: look for any timing patterns
+            print("Warning: Could not parse timing info, using fallback")
+            print("STDERR:", stderr_text)
+            return 1000  # High penalty for failed parsing
+
+        print("prompt eval time:", prompt_eval_time)
+        print("eval time:", eval_time)
+
+        return eval_time
+
+if __name__ == "__main__":
+    args = parse_args(default_bin='./build/bin/llama-cli')
+    # Define runtime options to optimize - Core Performance Parameters
+    run_options_list = [
+        # 1. Batch Processing Parameters (most critical for throughput)
+        ("--batch-size", ["--batch-size 32", "--batch-size 64", "--batch-size 128", "--batch-size 256", "--batch-size 512", "--batch-size 1024", "--batch-size 2048"]),
+        ("--ubatch-size", ["--ubatch-size 32", "--ubatch-size 64", "--ubatch-size 128", "--ubatch-size 256", "--ubatch-size 512"]),
+
+        # 2. Context and Memory Parameters
+        ("--ctx-size", ["-c 512", "-c 1024", "-c 2048", "-c 4096", "-c 8192"]),
+        ("--defrag-thold", ["--defrag-thold -1", "--defrag-thold 0.1", "--defrag-thold 0.2", "--defrag-thold 0.5"]),
+
+        # 3. GPU Offloading Parameters (critical for GPU performance)
+        # Set range to a value that makes sense for your model
+        ("--n-gpu-layers", [f"--n-gpu-layers {i}" for i in range(args.ngl)]),
+
+        # 4. CPU Optimization Parameters
+        ("--threads", ["-t 4", "-t 8", "-t 12", "-t 16"]),
+        # ("--prio", ["--prio 0", "--prio 1", "--prio 2"]),
+
+        # 5. Memory and Caching Parameters
+        # ("--use-mmap", ["", "--no-mmap"]),
+        ("--use-mlock", ["--mlock", ""]),
+        ("--kv-unified", ["--kv-unified", ""]),
+
+        # 6. Advanced Performance Features
+        ("--flash-attn", ["--flash-attn", ""]),
+        # ("--no-kv-offload", ["--no-kv-offload", ""]),  # Empty string means don't use the flag
+
+        # Keep seed constant for reproducible results
+        ("--seed", ["-s 42"]),
+    ]
+    main(args, run_str, run_binary, run_options_list)
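The timing parse in `run_binary` above can be exercised in isolation. The sample stderr snippet below is illustrative (not captured output), and the negative lookbehind is one way to keep the generic "eval time" pattern from also matching inside "prompt eval time":

```python
import re

prompt_eval_time_pattern = r"prompt eval time\s*=\s*([\d.]+)\s*ms"
# Lookbehind so this does not match inside "prompt eval time".
eval_time_pattern = r"(?<!prompt )eval time\s*=\s*([\d.]+)\s*ms"

# Illustrative stderr snippet in the shape tune_tps.py expects.
sample_stderr = (
    "prompt eval time =      88.00 ms\n"
    "       eval time =     123.45 ms\n"
)

prompt_eval_s = float(re.search(prompt_eval_time_pattern, sample_stderr).group(1)) / 1000
eval_s = float(re.search(eval_time_pattern, sample_stderr).group(1)) / 1000
print(prompt_eval_s, eval_s)
```

Without the lookbehind, `re.search` finds "eval time" inside the earlier "prompt eval time" line first and the generation timing is silently replaced by the prompt timing.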
bl4ckb0ne and others added 22 commits January 8, 2026 11:07
Co-authored-by: Italo Nicola <italo.nicola@collabora.com>
Co-authored-by: Lubosz Sarnecki <lubosz.sarnecki@collabora.com>

Signed-off-by: Simon Zeni <simon.zeni@collabora.com>
This new feature is only used by im2col and im2col_3d, and was missed when moving to VMA.
* vulkan: allocate aligned buffer for host

* Vulkan: make sure staging buffers are host visible and coherent

---------

Co-authored-by: Italo Nicola <italo.nicola@collabora.com>
Merge temp-inference-collabora into temp-7248
QVAC-11254: Merge v7248.1.0 into master
* Fix lib name resolution (tetherto#95)

* Update backend filename prefix for Windows and Linux to use 'qvac-ggml-'

* fix cmake exports

* add macro guards to prevent dlopen when dynamic backends disabled

* Update README.md

* metal : remove BF16 x F16 kernels (ggml-org#18456) (tetherto#97)

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* QVAC-13378: Model Metadata from .gguf without full loading (tetherto#100)

* add metadata helper

* improve_meta_handle

* improve_ptrs

* bugfix

* fixupIncrementalLoadingDupTensors

* fixupNamespaceDeletedMistake

* optimizeKvOnly

* attemptFixCi

* fixupEnumPollution

---------

Co-authored-by: gianni-cor <gianfrancocordella@gmail.com>

* meta_get_str (tetherto#101)

* Update linux runner for ubuntu-24-cmake-vulkan

* Update build.yml

* LoRA Finetuning (tetherto#99)

* Add TQ2_0 and TQ1_0 support to the Metal backend. (tetherto#85)

* Add TQ2_0 and TQ1_0 support to the Metal backend.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add tq2_0/q8_0 fallback aliases for loongarch/riscv.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Resolve macro function for tq2_0/q8_0/q8_1 and split into two separate functions.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add missing backslash to fix the macOS CI workflow.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* The Metal compiler doesn't allow constant address space on local variables.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix visionOS builds with LLAMA_HTTPLIB=OFF.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix WASM WebGPU builds with DLLAMA_BUILD_TOOLS=OFF.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

---------

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add inference support for BitNet models using Vulkan (tetherto#98)

* ggml-vulkan: Add TQ2_0 dequantize and mul_mat vec

* ggml-vulkan: Enable coopmat support for Android

* ggml-vulkan: Add mul_mm path for TQ2_0

* SET_ROWS and GET_ROWS has no TQ2_0 support yet.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Vulkan: Fix TQ2_0 mul_mm pipeline

* Add support for microsoft/bitnet-b1.58-2B-4T (HF to GGUF).

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Vulkan: TQ2_0 x Q8_1 MUL_MAT perf improvements

* Vulkan: Add TQ1_0 infra

* Vulkan: Add MUL_MAT_MAT and MUL_MAT_VEC support for TQ1

* Make sure we report the supported ops + datatypes.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

---------

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>
Co-authored-by: vineet <vineet.suryan@collabora.com>
Co-authored-by: Marcus Edel <marcus.edel@collabora.com>
Co-authored-by: Italo Nicola <italo.nicola@collabora.com>

* Ignore GGML_OP_SET_ROWS parameters during gradient calculation, since there is no effect on the output gradients.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add lora finetuning from adapter

* Add: create new lora adapter for target modules to finetune if no lora is provided

* Fix identical loss over epochs; fix garbage lora initialization

Signed-off-by: vineet <vineet.suryan@collabora.com>

* Remove lora training from finetune.cpp

Signed-off-by: vineet <vineet.suryan@collabora.com>

* Add adapter saving & other lora target modules

Signed-off-by: vineet <vineet.suryan@collabora.com>

* Add finetune-lora for lora finetuning in examples

Signed-off-by: vineet <vineet.suryan@collabora.com>

* Update README with finetune-lora

Signed-off-by: vineet <vineet.suryan@collabora.com>

* Add dequantization to out_prod cuda kernel

Signed-off-by: vineet <vineet.suryan@collabora.com>

* CPU: add support for fp16_fp32 OUT_PROD op

* Remove unused variable val_split.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Explicitly define the optimizer, to fix missing initializer for member issue.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* finetune-lora: Add checkpoint saving & resuming from saved checkpoint
This commit adds checkpointing for fine-tuning:
- Add checkpoint saving every N steps with --checkpoint-save-steps
- Save complete training state: model weights, optimizer state, metadata
- Implement two-phase optimizer state loading to avoid memory issues
- Add --resume-from and --auto-resume functionality
- Store optimizer momentum/variance tensors in GGUF format
- Add checkpoint validation for rank, alpha, and target modules
- Update README.md with checkpointing documentation

The optimizer state loading is two-phase: the iteration count is loaded
during initialization, while tensor data (grad_m, grad_v) is loaded after
ggml_opt_alloc creates the proper tensor structures.

* Add simple test to choose the right datatype based on the supported OUT_PROD datatype implementation.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add OUT_PROD, RMS_NORM_BACK, SILU_BACK metal shader.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* lora: Fix LoRA K/V gradient flow with gradient-connected kv cache retrieval

Add get_k_lora() and get_v_lora() methods that use concatenation
instead of ggml_view_4d to maintain gradient connectivity during
training. This ensures LoRA K/V parameters receive proper gradients
while preserving causal attention behavior.

* lora: Add Instruction Finetuning support

- Add masked loss computation on assistant responses only
- Implement Vulkan masked cross-entropy loss shader & count_equal shader
- Support default ChatML template & custom jinja chat templates

* Add SOFT_MAX_BACK metal kernel.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Extend swift example app with finetuning support.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix Q4 OUT_PROD iq upper handling.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add learning rate scheduler: constant (default), linear, and cosine.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add warmup-ratio parameter to match HF training.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>
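The two commits above describe standard learning-rate schedules plus HF-style warmup. A minimal sketch of how those options typically compose (the function and its signature are illustrative, not the actual llama.cpp API):

```python
import math

def lr_at(step, total_steps, base_lr, schedule="cosine", warmup_ratio=0.0):
    # Generic sketch of constant / linear / cosine schedules with optional
    # linear warmup, mirroring the options named in the commits above.
    warmup_steps = int(total_steps * warmup_ratio)
    if warmup_steps > 0 and step < warmup_steps:
        # Linear warmup from ~0 up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    if schedule == "linear":
        return base_lr * (1.0 - progress)
    if schedule == "cosine":
        return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
    return base_lr  # constant (default)
```

Cosine decays smoothly from base_lr to zero over the post-warmup steps; linear decays at a fixed rate; constant ignores progress entirely.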

* lora: Fix lr assertion on step 0

* lora: Fix training start from step 2

* Added

* Updating code to enable mid-epoch cancellation

* cpp lint applied

* Fix geglu_back implementation

- Fix CPU implementation: now correctly computes gelu_backward(gate, grad) instead of
splitting computation across two halves
- Update Vulkan shader to match corrected implementation with proper gelu_backward
- Add a test for geglu_back op

The previous implementation incorrectly assumed geglu_back operated on concatenated
tensors and split them. The correct implementation computes the GELU backward pass
element-wise on the gate values.
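The corrected element-wise backward can be sketched as follows, using the tanh approximation of GELU (the exact formulation in the kernels may differ; names here are illustrative):

```python
import math

def gelu(x: float) -> float:
    # tanh approximation of GELU, as commonly used in ggml-style kernels
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def gelu_back(grad: float, x: float) -> float:
    # grad * d/dx gelu(x), derived from the tanh approximation above
    s = math.sqrt(2.0 / math.pi)
    inner = s * (x + 0.044715 * x ** 3)
    t = math.tanh(inner)
    dinner = s * (1.0 + 3.0 * 0.044715 * x ** 2)
    dgelu = 0.5 * (1.0 + t) + 0.5 * x * (1.0 - t * t) * dinner
    return grad * dgelu

def geglu_back(grad, gate):
    # Element-wise backward on the gate values -- no splitting of the
    # tensor into halves, which was the bug the commit above fixes.
    return [gelu_back(g, x) for g, x in zip(grad, gate)]
```

A finite-difference check against `gelu` confirms the analytic derivative matches the numerical one to well within float tolerance.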

* Gemma Chat Template Support for LoRA Finetuning

- Add auto-detection for Gemma format (<start_of_turn>model\n...<end_of_turn>)
- Falls back to ChatML format for other models
- Uses models default chat-template i.e. no need for jinja chat-template

This enables instruction finetuning on any model.

* Fixed ibatch Mismatch in llama_opt_epoch Resume

* CPP lint ran

* lora: Update readme; add architecture overview

* Add guide about how to support a new model.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Simplify main README to focus on LoRA finetuning. (tetherto#71)

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Vulkan: add support for fp32 OUT_PROD op

* Vulkan: add support for f16_f32 OUT_PROD op

* Vulkan: Add Q4_0/Q8_0 OUT_PROD Vulkan support

* vulkan: Add initial cross entropy loss backward shader

Signed-off-by: vineet <vineet.suryan@collabora.com>

* vulkan: Fix cross-entropy-loss-back dispatch size and wg denominator

Signed-off-by: vineet <vineet.suryan@collabora.com>

* vulkan: Change uint32 cast to int32 for outprod; allows android compilation

Signed-off-by: vineet <vineet.suryan@collabora.com>

* vulkan: Set specialization constants to { 0 } for out_prod

This fixes the vkDeviceLostError on Mali

* vulkan: Set out_prod pipeline disable_robustness to true

* Fix out_prod; vulkan ci issues

* Add GEGLU backward (Vulkan) to enable Gemma training.

* Vulkan: Clean up OUT_PROD shader and pipelines

Shouldn't change any behavior since currently nb00 is always 1.
Robustness is usually disabled for Q8/Q4 shaders since having it enabled
impacts performance more significantly for those types than F16/F32.

* Vulkan: Improve Q8 OUT_PROD performance

Increase OUT_PROD Q8 performance through improving memory locality.

* metal: port OUT_PROD, SILU_BACK, SOFT_MAX_BACK, RMS_NORM_BACK ops to split architecture

* Backport shader.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Initialize sin_sign in rope kargs to fix broken positional encoding.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix Windows build by using path::string() for wchar_t conversion

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix format specifiers for int64_t portability.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add missing resume_from_batch arg to llama_opt_epoch call.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix TQ2_0 dequantization.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix slow ReBAR reads on discrete GPUs and relax contiguity checks for backward pass.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Use VMA random-access host alloc, skip n_ctx padding and host-buft override during training.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Fix loss calculation and TQ2_0 dequantization.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* ggml-vulkan: workaround for Adreno MUL_MAT Q6_K

* ggml-vulkan: workaround for Adreno MUL_MAT TQ1

* vulkan: revert graph_optimize skip for prompt processing

* vulkan: ensure host coherent memory on UMA devices

Signed-off-by: vineet <vineet.suryan@collabora.com>

* ggml-vulkan: fix GGML_VULKAN_CHECK_RESULTS

* ggml-vulkan: skip CROSS_ENTROPY_LOSS_MASKED for check_results

* ggml-vulkan: skip COUNT_EQUAL_MASKED for check_results

* ggml-vulkan: improve OUT_PROD Q4 performance

* Fix LLAMA_LORA_TARGET_ALL bitmask

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* Preserve C API compatibility for llama_opt_epoch

Add llama_opt_epoch_resume function for the resume-from-batch
use case and update callers accordingly.

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* lora: enhance LoRA init safety and simplify caller

- Add overflow and error checks for snprintf when generating LoRA tensor names
- Encapsulate tensor pointer validation within llama_lora_init_tensor_weights()
  and return bool to simplify the caller

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: add llama_opt_default_params and use it in examples

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: add reproducible seed, improve safety and style in LoRA training

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: refactor LoRA tensor init to use exceptions

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* ggml-opt: refactor batch memory copying to use lambda

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* fix: typo in ggml.c & README

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: document masking constraints and fix metadata extension

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* tests: add ops tests for cross_entropy_loss_masked

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* Add bounds check for --chat-template argument parsing & remove stray backslash

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: add TODO for refactoring CLI argument parsing

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: add a comment about dropout not being used yet

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* training: add static_assert to catch llama_layer padding issues

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* graph: restore ggml_view_4d for non-contiguous Q tensor support

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* ggml-vulkan: Add buffer sync to cross_entropy_loss_masked_back op

* ggml-vulkan: add support for tiling as a workaround for memory issues

* The Metal ADD shader already uses strides for indexing, so non-contiguous tensors work correctly.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Wrap tensor and make it contiguous.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Add comment on graph_max_nodes bump for LoRA finetuning.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* fix: resume_from_batch=0 incorrectly treated as no-resume in opt_epoch

llama_opt_epoch_resume accepts a resume_from_batch parameter where -1
means "no resume, start from the beginning." However, opt_epoch used
`resume_from_batch > 0` to distinguish resume from non-resume, which
means resume_from_batch=0 (a valid value meaning "batch 0 was the last
completed, start from batch 1") was silently treated as no-resume,
causing the entire epoch to replay from the start.

This affects any caller that pauses training after the first batch of
an epoch (globalStep=1, or any globalStep that is a multiple of
stepsPerEpoch + 1), since the computed resume batch offset modulo
stepsPerEpoch is 0.

Fix: change `> 0` to `>= 0` in both the idata start position and the
idata_in_loop calculation, so that -1 remains the only sentinel for
"no resume."

Made-with: Cursor
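The sentinel logic described in the commit above can be condensed into a small sketch (the function name is illustrative, not the actual llama.cpp code):

```python
def start_batch(resume_from_batch: int) -> int:
    # -1 is the only "no resume" sentinel; 0 is a valid resume point
    # (batch 0 was the last completed, so training continues at batch 1).
    if resume_from_batch >= 0:  # was `> 0`, which silently swallowed 0
        return resume_from_batch + 1
    return 0
```

With the old `> 0` comparison, `start_batch(0)` would have returned 0 and replayed the epoch from the beginning.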

* Fix memory leak in optimizer state loading.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Disable command-buffer concurrency by default on iOS.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* Override and default to n_cb=2 on iOS.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* fix: restore context state for inference after training cleanup

Save and restore n_ctx_train in opt_init/opt_cleanup to prevent
training from permanently modifying the model's context length.
Reset the scheduler and clear the previous graph result in opt_cleanup
so the context can be reused for inference after finetuning.

Made-with: Cursor

* Add @autoreleasepool to encode_async block to prevent ObjC object accumulation on GCD worker threads.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* fix: keep output tensor on CPU for iOS to avoid Metal buffer limits

On iOS, cap GPU-offloaded layers at n_layer (excluding the output layer)
to prevent exceeding Metal memory constraints on mobile devices.

Made-with: Cursor

* Remove unused variable 'tensor_name'.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

* training: fix LLAMA_LORA_TARGET_ALL for ISO C compliance

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* ci: disable native CPU optimizations for x64-cpu-low-perf builds

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* ci: increase timeout for ubuntu-24-cmake-vulkan tests

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* Add resume-from-checkpoint support to Metal LoRA fine-tuning

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* Fix missing parameters for llama_swift_finetune_options

Signed-off-by: Italo Nicola <italo.nicola@collabora.com>

* tests: disable TQ2_0 tests in test-backend-ops due to llvmpipe bug

Temporarily disable TQ2_0 quantization tests to work around a bug in
llvmpipe. Tests pass successfully on all real Vulkan hardware
(Nvidia, ARM GPUs) but fail on llvmpipe with high error values.

Signed-off-by: makaveli10 <vineet.suryan@collabora.com>

* Enable the tests again.

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>

---------

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>
Signed-off-by: vineet <vineet.suryan@collabora.com>
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Signed-off-by: Italo Nicola <italo.nicola@collabora.com>
Co-authored-by: gianni <gianfranco.cordella@tether.io>
Co-authored-by: vineet <vineet.suryan@collabora.com>
Co-authored-by: Italo Nicola <italo.nicola@collabora.com>
Co-authored-by: Nidhin <nidhinpd811@gmail.com>
Co-authored-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Co-authored-by: gianni-cor <gianfrancocordella@gmail.com>

---------

Signed-off-by: Marcus Edel <marcus.edel@collabora.com>
Signed-off-by: vineet <vineet.suryan@collabora.com>
Signed-off-by: makaveli10 <vineet.suryan@collabora.com>
Signed-off-by: Italo Nicola <italo.nicola@collabora.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jesús <jesus.mb.1995@gmail.com>
Co-authored-by: Marcus Edel <marcus.edel@fu-berlin.de>
Co-authored-by: vineet <vineet.suryan@collabora.com>
Co-authored-by: Italo Nicola <italo.nicola@collabora.com>
Co-authored-by: Nidhin <nidhinpd811@gmail.com>
Co-authored-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
* resume patch removed (tetherto#103)

* Update Readme (tetherto#104)

---------

Co-authored-by: dev-nid <nidhinpd811@gmail.com>
* toggle -DLLAMA_BUILD_EXAMPLES to ON to include llama-finetune binaries

* remove artifacts with errors

---------

Co-authored-by: akshay <akshay@peartree.to>
* toggle -DLLAMA_BUILD_EXAMPLES to ON to include llama-finetune binaries

* remove artifacts with errors

* correct dll name for ggml-vulkan

* add android build

---------

Co-authored-by: akshay <akshay@peartree.to>
Strip trailing whitespace, add missing final newlines, fix tab-to-space
indent, remove UTF-8 BOM, and exclude vendor/ from editorconfig checks.
Update BitNet model link in README.md
@GuthL GuthL requested a review from a team as a code owner March 19, 2026 10:05

GuthL commented Mar 19, 2026

I pushed an updated fix on this branch.

The original change narrowed GGML_OP_OUT_PROD support to stop the immediate CUDA crash, but that was only a minimal guard. The current version keeps CUDA support for mixed input types and converts non-f32 inputs to temporary f32 buffers inside ggml_cuda_out_prod, which matches how the LoRA trainer probes and uses OUT_PROD (f16 x f32 -> f32).

So the branch now fixes the crash without dropping the mixed-input CUDA path that the trainer expects.
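The conversion the comment describes — promoting a non-f32 source to a temporary f32 buffer so the f32 GEMM path can consume it — can be illustrated with a host-side C++ sketch. This is not the actual CUDA kernel from the branch (on device the promotion would be a conversion kernel feeding cuBLAS SGEMM); the function names are hypothetical, and the half-to-float decode is a plain IEEE-754 binary16 expansion.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Decode one IEEE-754 binary16 value into a float.
static float half_to_float(uint16_t h) {
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t mant = h & 0x3FF;
    uint32_t bits;
    if (exp == 0) {
        if (mant == 0) {
            bits = sign;                             // +/- zero
        } else {
            int e = -1;                              // subnormal: renormalize
            do { mant <<= 1; e++; } while ((mant & 0x400) == 0);
            mant &= 0x3FF;
            bits = sign | ((uint32_t)(127 - 15 - e) << 23) | (mant << 13);
        }
    } else if (exp == 0x1F) {
        bits = sign | 0x7F800000 | (mant << 13);     // inf / NaN
    } else {
        bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Promote an f16 buffer to a temporary f32 buffer, mirroring (on the host)
// what the updated ggml_cuda_out_prod is described as doing on device before
// handing both sources to the f32 GEMM.
static std::vector<float> promote_f16_to_f32(const std::vector<uint16_t> & src) {
    std::vector<float> dst(src.size());
    for (size_t i = 0; i < src.size(); ++i) {
        dst[i] = half_to_float(src[i]);
    }
    return dst;
}
```

The design point is that the support guard can then stay permissive for the f16 x f32 case the LoRA trainer probes, because the op implementation normalizes its inputs instead of relying on the scheduler to reject them.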

@gianni-cor force-pushed the master branch 3 times, most recently from 992e34c to b9ac720 on March 23, 2026 at 19:17