Merged
28 commits
6116ea5
Use runs-on GPU runners for CI
dfalbel Apr 28, 2026
69013df
Revert container to ubuntu18.04 for CUDA 11.2 compatibility
dfalbel Apr 28, 2026
f5aa3ac
Use CUDA 11.2.2 container (11.2.1 removed from Docker Hub)
dfalbel Apr 28, 2026
53d2f8f
Bump container to ubuntu20.04 (18.04 glibc too old for Node 20 actions)
dfalbel Apr 28, 2026
1586f9c
Split CI into build-image (free runner) and test-gpu (GPU runner)
dfalbel Apr 28, 2026
93d564a
Fix sklearn install: use scikit-learn package name and py_require()
dfalbel Apr 28, 2026
cc90bdf
Fix configure warnings: normalizePath ordering and cmake unused variable
dfalbel Apr 28, 2026
843cc4e
Fix TSVD tests for SVD sign ambiguity between cuML and sklearn
dfalbel Apr 28, 2026
f626979
Fix sklearn max_iter type: use integer (10000L) not float (10000.0)
dfalbel Apr 28, 2026
0203f78
Add CRAN-like check job (no CUDA, stub headers, ubuntu-latest)
dfalbel Apr 28, 2026
a65f660
Update roxygen
t-kalinowski Apr 24, 2026
ee3b9a4
export S3 methods
t-kalinowski Apr 24, 2026
0a55c04
roxygen updates
t-kalinowski Apr 24, 2026
c8dd8ad
Fix CRAN check: escape Rd braces, skip tests without cuML
dfalbel Apr 28, 2026
15c988a
Fix examples brace escaping and register S3 methods
dfalbel Apr 29, 2026
0ca00e9
Add RAPIDS cuML 26.04 + CUDA 12 support
dfalbel Apr 29, 2026
135d18a
Test both cuML 21.12 and 26.04 in CI
dfalbel Apr 29, 2026
e618aa3
Fix rapids-cmake version and lib symlink for dual cuML support
dfalbel Apr 29, 2026
c8bd6ed
Derive rapids-cmake tag from cuML version instead of hardcoding
dfalbel Apr 29, 2026
1591b2b
Require cmake 3.30.4+ for cuML 26.04 (auto-downloaded if missing)
dfalbel Apr 29, 2026
fd4fb4e
Fix cuML 26.04 build: raft/rmm deps, static_assert, device_allocator
dfalbel Apr 29, 2026
145b2b9
Resolve cuML PyPI deps dynamically instead of hardcoding URLs
dfalbel Apr 29, 2026
45e3dd2
Download CCCL 3.3 headers for cuML 26.04 builds
dfalbel Apr 29, 2026
bb27ca1
Put CUML_INCLUDE_DIR before CUDA toolkit includes
dfalbel Apr 29, 2026
189371f
Fix CCCL compat, pinned_allocator removal, and raft handle API
dfalbel Apr 29, 2026
59138d0
Switch to cuML 25.12 (no CCCL 3.x requirement)
dfalbel Apr 29, 2026
473db2a
Define LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE for RMM
dfalbel Apr 29, 2026
0b13f1f
Revert cuML 25.x/26.x support (CCCL 3.x incompatible with CUDA 12)
dfalbel Apr 29, 2026
3 changes: 3 additions & 0 deletions .Rbuildignore
@@ -31,3 +31,6 @@
^libcuml/*
^\.github$
^\.lsan-suppressions\.txt$
^\.positai$
^\.claude$
^\.codex$
53 changes: 53 additions & 0 deletions .github/docker/Dockerfile
@@ -0,0 +1,53 @@
FROM nvidia/cuda:11.2.2-devel-ubuntu20.04

ENV DEBIAN_FRONTEND=noninteractive

# System dependencies
RUN apt-get update -y && apt-get install -y \
sudo software-properties-common dialog apt-utils \
tzdata locales curl wget git \
libcurl4-openssl-dev libssl-dev libxml2-dev \
libfontconfig1-dev libfreetype6-dev libpng-dev \
libharfbuzz-dev libfribidi-dev libtiff5-dev libjpeg-dev \
make gcc g++ pandoc python3 python3-pip

# Install R via rig
RUN curl -L https://rig.r-pkg.org/deb/rig.gpg -o /etc/apt/trusted.gpg.d/rig.gpg \
&& echo "deb http://rig.r-pkg.org/deb rig main" > /etc/apt/sources.list.d/rig.list \
&& apt-get update \
&& apt-get install -y r-rig \
&& rig add release \
&& rig default release \
&& rm -rf /var/lib/apt/lists/*

# Use a fixed library path (not HOME-dependent) so packages are found
# regardless of what HOME is set to at runtime (GitHub Actions sets HOME=/github/home)
ENV R_LIBS_USER=/opt/R/library
RUN mkdir -p /opt/R/library

# Parallel compilation
RUN echo "MAKEFLAGS=-j$(nproc)" >> "$(R RHOME)/etc/Renviron.site"

# Copy source
COPY . /build

ARG CUML_VERSION=21.12
ENV CUML_VERSION=${CUML_VERSION}

# Cross-compile for T4 GPU (compute capability 7.5) since build runner has no GPU
ARG CMAKE_CUDA_ARCHITECTURES=75
ENV CMAKE_CUDA_ARCHITECTURES=${CMAKE_CUDA_ARCHITECTURES}

ENV NOT_CRAN=true

# Install R dependencies
RUN Rscript -e "\
install.packages('pak', repos = 'https://r-lib.github.io/p/pak/devel/'); \
pak::local_install_deps('/build', dependencies = TRUE)" \
&& rm -rf /tmp/* /root/.cache

# Install cuda.ml with tests
RUN R CMD INSTALL --install-tests /build

# Clean up
RUN rm -rf /tmp/* /build
164 changes: 72 additions & 92 deletions .github/workflows/R-CMD-check.yaml
@@ -1,5 +1,3 @@
# Workflow derived from https://github.com/r-lib/actions/tree/master/examples
# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help
on:
push:
branches: [main]
@@ -9,117 +7,99 @@ on:
name: R-CMD-check

jobs:
R-CMD-check:

check-cran:
strategy:
fail-fast: false
matrix:
cuda: ['11.2.1']
cuml: ['21.08', '21.10', '21.12']
r: ['release', 'devel']
asan: ['false', 'true']

runs-on: ['self-hosted', 'gpu']
container:
image: nvidia/cuda:${{ matrix.cuda }}-devel-ubuntu18.04
options: --gpus all

name: 'R: ${{ matrix.r }}, CUDA: ${{ matrix.cuda }}, CUML: ${{ matrix.cuml }}, ASAN: ${{ matrix.asan }}'
runs-on: ubuntu-latest
name: 'CRAN (R: ${{ matrix.r }})'

env:
NOT_CRAN: true
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
R_KEEP_PKG_SOURCE: yes
CUML_VERSION: ${{ matrix.cuml }}
CUML4R_ENABLE_ASAN: ${{ matrix.asan }}
DEBIAN_FRONTEND: 'noninteractive'

steps:
- run: |
apt-get update -y
apt-get install -y sudo software-properties-common dialog apt-utils tzdata
if [[ $CUML4R_ENABLE_ASAN == 'true' ]]; then
apt-get install -y libasan5
fi
shell: bash

- uses: actions/checkout@v2
- uses: actions/checkout@v4

- uses: r-lib/actions/setup-pandoc@v1
- uses: r-lib/actions/setup-pandoc@v2

- uses: actions/setup-python@v2
with:
python-version: '3.x'
architecture: 'x64'

- uses: r-lib/actions/setup-r@master
- uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.r }}
http-user-agent: ${{ matrix.config.http-user-agent }}
use-public-rspm: true

- uses: r-lib/actions/setup-r-dependencies@v1
- uses: r-lib/actions/setup-r-dependencies@v2
with:
extra-packages: rcmdcheck
needs: check

- name: Build
run: R CMD build .

- name: Check
run: R CMD check --no-manual --as-cran cuda.ml_*.tar.gz
env:
_R_CHECK_CRAN_INCOMING_: false

build-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
timeout-minutes: 120
outputs:
image: ghcr.io/${{ github.repository }}-ci:${{ github.sha }}
steps:
- uses: actions/checkout@v4

- name: Build {cuda.ml}
id: build-pkg
run: |
cd ..
ls -a
rm -v cuda.ml_*.tar.gz
R CMD build cuda.ml
ls -a
echo "::set-output name=pkg-dir::$(pwd)"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: .github/docker/Dockerfile
push: true
tags: ghcr.io/${{ github.repository }}-ci:${{ github.sha }}
build-args: |
CUML_VERSION=21.12
CMAKE_CUDA_ARCHITECTURES=75

test-gpu:
needs: build-image
if: ${{ always() && needs.build-image.result == 'success' }}
concurrency:
group: gpu-tests
runs-on:
- "runs-on=${{ github.run_id }}/family=g4dn.xlarge/image=ubuntu24-gpu-x64/spot=true"
container:
image: ${{ needs.build-image.outputs.image }}
credentials:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
options: --gpus all --runtime=nvidia
timeout-minutes: 60
env:
NOT_CRAN: true

- run: cp -v cuda.ml/.lsan-suppressions.txt /tmp
working-directory: ${{ steps.build-pkg.outputs.pkg-dir }}
steps:
- name: Verify GPU access
run: nvidia-smi

- name: Check {cuda.ml} package
run: |
print(list.files("."))
pkg <- list.files(".", pattern = "cuda\\.ml_.*\\.tar\\.gz")
stopifnot(length(pkg) == 1)

reticulate::install_miniconda(force = TRUE)

rcmdcheck_env <- (
if (identical(Sys.getenv("CUML4R_ENABLE_ASAN"), "true")) {
c(
LD_PRELOAD = "/usr/lib/x86_64-linux-gnu/libasan.so.5",
ASAN_OPTIONS = "halt_on_error=0,new_delete_type_mismatch=0,alloc_dealloc_mismatch=0,protect_shadow_gap=0",
LSAN_OPTIONS = "suppressions=/tmp/.lsan-suppressions.txt"
)
} else {
character()
}
)
rcmdcheck::rcmdcheck(
path = pkg[[1]],
args = c("--no-manual", "--as-cran"),
check_dir="check",
env = rcmdcheck_env
)
shell: Rscript {0}
working-directory: ${{ steps.build-pkg.outputs.pkg-dir }}

- name: Show testthat output
if: ${{ always() }}
- name: Session info
run: |
find check -name 'testthat.Rout*' -type f -exec cat '{}' \; || :
shell: bash
working-directory: ${{ steps.build-pkg.outputs.pkg-dir }}
Rscript -e "sessionInfo()"
Rscript -e "library(cuda.ml)"

- name: Check for sanitizer error(s)
if: ${{ always() }}
- name: Run tests
run: |
! find check -name 'testthat.Rout*' -type f -exec egrep -C 50 'ERROR: .*Sanitizer:' '{}' +
shell: bash
working-directory: ${{ steps.build-pkg.outputs.pkg-dir }}

- name: Upload check results
if: ${{ failure() }}
uses: actions/upload-artifact@main
with:
name: ${{ runner.os }}-r${{ matrix.r }}-results
path: ${{ steps.build-pkg.outputs.pkg-dir }}/check
Rscript -e "testthat::test_package('cuda.ml', reporter = 'progress')"
2 changes: 2 additions & 0 deletions .gitignore
@@ -12,3 +12,5 @@ cuda.ml.Rcheck
*.cmake
*.a
00check.log
.positai
.codex
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -45,7 +45,7 @@ Suggests:
xgboost
LinkingTo: Rcpp
Encoding: UTF-8
RoxygenNote: 7.1.2
RoxygenNote: 7.3.3
OS_type: unix
SystemRequirements: RAPIDS cuML (see https://rapids.ai/start.html)
NeedsCompilation: yes
21 changes: 21 additions & 0 deletions NAMESPACE
@@ -1,12 +1,24 @@
# Generated by roxygen2: do not edit by hand

S3method(cuda_ml_can_predict_class_probabilities,cuda_ml_fil)
S3method(cuda_ml_can_predict_class_probabilities,cuda_ml_knn)
S3method(cuda_ml_can_predict_class_probabilities,cuda_ml_model)
S3method(cuda_ml_can_predict_class_probabilities,cuda_ml_rand_forest)
S3method(cuda_ml_can_predict_class_probabilities,default)
S3method(cuda_ml_elastic_net,data.frame)
S3method(cuda_ml_elastic_net,default)
S3method(cuda_ml_elastic_net,formula)
S3method(cuda_ml_elastic_net,matrix)
S3method(cuda_ml_elastic_net,recipe)
S3method(cuda_ml_get_state,cuda_ml_model)
S3method(cuda_ml_get_state,cuda_ml_pca)
S3method(cuda_ml_get_state,cuda_ml_rand_forest)
S3method(cuda_ml_get_state,cuda_ml_rand_proj_model)
S3method(cuda_ml_get_state,cuda_ml_svc)
S3method(cuda_ml_get_state,cuda_ml_svc_ovr)
S3method(cuda_ml_get_state,cuda_ml_svr)
S3method(cuda_ml_get_state,cuda_ml_umap)
S3method(cuda_ml_get_state,default)
S3method(cuda_ml_inverse_transform,cuda_ml_pca)
S3method(cuda_ml_inverse_transform,cuda_ml_tsvd)
S3method(cuda_ml_is_classifier,cuda_ml_model)
@@ -43,6 +55,15 @@ S3method(cuda_ml_ridge,matrix)
S3method(cuda_ml_ridge,recipe)
S3method(cuda_ml_serialize,cuda_ml_model)
S3method(cuda_ml_serialize,default)
S3method(cuda_ml_set_state,cuda_ml_model_state)
S3method(cuda_ml_set_state,cuda_ml_pca_model_state)
S3method(cuda_ml_set_state,cuda_ml_rand_forest_model_state)
S3method(cuda_ml_set_state,cuda_ml_rand_proj_model_state)
S3method(cuda_ml_set_state,cuda_ml_svc_model_state)
S3method(cuda_ml_set_state,cuda_ml_svc_ovr_model_state)
S3method(cuda_ml_set_state,cuda_ml_svr_model_state)
S3method(cuda_ml_set_state,cuda_ml_umap_model_state)
S3method(cuda_ml_set_state,default)
S3method(cuda_ml_sgd,data.frame)
S3method(cuda_ml_sgd,default)
S3method(cuda_ml_sgd,formula)
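For context, the `S3method()` entries added above are generated by roxygen2 rather than written by hand: tagging an S3 method definition with `@export` is what produces the registration line in NAMESPACE. A minimal sketch (the method body here is purely illustrative, not the package's actual implementation):

```r
#' @export
cuda_ml_get_state.cuda_ml_pca <- function(model, ...) {
  # Return a serializable state object for a fitted PCA model
  # (hypothetical body for illustration only).
  structure(
    list(params = model$params),
    class = "cuda_ml_pca_model_state"
  )
}
```

Running `devtools::document()` (or `roxygen2::roxygenise()`) then emits `S3method(cuda_ml_get_state,cuda_ml_pca)` into NAMESPACE, which registers the method so dispatch works even when the package is loaded without being attached.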
4 changes: 2 additions & 2 deletions R/agglomerative.R
@@ -18,10 +18,10 @@ agglomerative_clustering_match_metric <- function(metric = c("euclidean", "l1",
#' @template model-with-numeric-input
#' @param n_clusters The number of clusters to find. Default: 2L.
#' @param metric Metric used for linkage computation. Must be one of
#' {"euclidean", "l1", "l2", "manhattan", "cosine"}. If connectivity is
#' \{"euclidean", "l1", "l2", "manhattan", "cosine"\}. If connectivity is
#' "knn" then only "euclidean" is accepted. Default: "euclidean".
#' @param connectivity The type of connectivity matrix to compute. Must be one
#' of {"pairwise", "knn"}. Default: "pairwise".
#' of \{"pairwise", "knn"\}. Default: "pairwise".
#' - 'pairwise' will compute the entire fully-connected graph of pairwise
#' distances between each set of points. This is the fastest to compute
#' and can be very fast for smaller datasets but requires O(n^2) space.
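The arguments documented in this hunk correspond to a call along the following lines (a hedged sketch based on the parameter docs above; check the package reference for the exact signature and defaults):

```r
library(cuda.ml)

# Cluster the iris measurements into 3 groups. "pairwise" connectivity
# computes the fully-connected pairwise distance graph, which (per the
# docs above) is fastest for small data but requires O(n^2) space.
fit <- cuda_ml_agglomerative_clustering(
  x = as.matrix(iris[, 1:4]),
  n_clusters = 3L,
  metric = "euclidean",
  connectivity = "pairwise"
)
```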
16 changes: 8 additions & 8 deletions R/cuml_utils.R
@@ -1,7 +1,7 @@
#' Determine whether {cuda.ml} was linked to a valid version of the RAPIDS cuML
#' Determine whether \{cuda.ml\} was linked to a valid version of the RAPIDS cuML
#' shared library.
#'
#' @return A logical value indicating whether the current installation {cuda.ml}
#' @return A logical value indicating whether the current installation \{cuda.ml\}
#' was linked to a valid version of the RAPIDS cuML shared library.
#'
#' @examples
@@ -17,11 +17,11 @@
#' @export
has_cuML <- .has_cuML

#' Get the major version of the RAPIDS cuML shared library {cuda.ml} was linked
#' Get the major version of the RAPIDS cuML shared library \{cuda.ml\} was linked
#' to.
#'
#' @return The major version of the RAPIDS cuML shared library {cuda.ml} was
#' linked to in a character vector, or \code{NA_character_} if {cuda.ml} was not
#' @return The major version of the RAPIDS cuML shared library \{cuda.ml\} was
#' linked to in a character vector, or \code{NA_character_} if \{cuda.ml\} was not
#' linked to any version of RAPIDS cuML.
#'
#' @examples
@@ -32,11 +32,11 @@ has_cuML <- .has_cuML
#' @export
cuML_major_version <- .cuML_major_version

#' Get the minor version of the RAPIDS cuML shared library {cuda.ml} was linked
#' Get the minor version of the RAPIDS cuML shared library \{cuda.ml\} was linked
#' to.
#'
#' @return The minor version of the RAPIDS cuML shared library {cuda.ml} was
#' linked to in a character vector, or \code{NA_character_} if {cuda.ml} was not
#' @return The minor version of the RAPIDS cuML shared library \{cuda.ml\} was
#' linked to in a character vector, or \code{NA_character_} if \{cuda.ml\} was not
#' linked to any version of RAPIDS cuML.
#'
#' @examples
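The helpers documented in this file are useful for guarding GPU-only code paths; a small sketch combining them (version-helper names per the docs above):

```r
library(cuda.ml)

if (has_cuML()) {
  # Report which cuML release this installation was linked against.
  message(
    "Linked against RAPIDS cuML ",
    cuML_major_version(), ".", cuML_minor_version()
  )
} else {
  message("cuda.ml was not linked to RAPIDS cuML; GPU functionality is disabled.")
}
```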
8 changes: 4 additions & 4 deletions R/fil.R
@@ -1,11 +1,11 @@
#' Determine whether Forest Inference Library (FIL) functionalities are enabled
#' in the current installation of {cuda.ml}.
#' in the current installation of \{cuda.ml\}.
#'
#' CuML Forest Inference Library (FIL) functionalities (see
#' https://github.com/rapidsai/cuml/tree/main/python/cuml/fil#readme) will
#' require Treelite C API. If you need FIL to run tree-based model ensemble on
#' GPU, and \code{fil_enabled()} returns FALSE, then please consider installing
#' Treelite and then re-installing {cuda.ml}.
#' Treelite and then re-installing \{cuda.ml\}.
#'
#' @return A logical value indicating whether the Forest Inference Library (FIL)
#' functionalities are enabled.
@@ -62,9 +62,9 @@ file_match_storage_type <- function(storage_type = c("auto", "dense", "sparse"))
#'
#' @param filename Path to the saved model file.
#' @param mode Type of task to be performed by the model. Must be one of
#' {"classification", "regression"}.
#' \{"classification", "regression"\}.
#' @param model_type Format of the saved model file. Notice if \code{filename}
#' ends with ".json" and \code{model_type} is "xgboost", then {cuda.ml} will
#' ends with ".json" and \code{model_type} is "xgboost", then \{cuda.ml\} will
#' assume the model file is in XGBoost JSON (instead of binary) format.
#' Default: "xgboost".
#' @param algo Type of the algorithm for inference, must be one of the
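Putting the documented parameters together, loading a saved XGBoost model for GPU inference looks roughly like this (a hedged sketch; "model.json" is a hypothetical path, and the loader name should be checked against the package's FIL reference):

```r
library(cuda.ml)

if (fil_enabled()) {
  # Because the file ends in ".json" and model_type is "xgboost", cuda.ml
  # treats it as XGBoost JSON rather than the binary format (see docs above).
  model <- cuda_ml_fil_load_model(
    filename = "model.json",
    mode = "classification",
    model_type = "xgboost"
  )
}
```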