Conversation
❌ 5 Tests Failed:
View the top 3 failed test(s) by shortest run time
To view more test analytics, go to the Test Analytics Dashboard.
📖 https://ucd-serg.github.io/serodynamics/preview/pr141
I am unsure why these final tests aren't passing. @Kwan-Jenny Any thoughts?

I am not sure what's causing these final test failures. It may be helpful to ask @d-morrison for his thoughts.
Hi @d-morrison, I have not seen this review failure message before. I am a bit confused about how it came up, as I made very few changes since the last review. It looks like it might have something to do with package dependencies, but I am not sure. Once this is resolved, I can send up the posterior predictive check for review.
@copilot Please review the following comments and revise this PR accordingly.

| Priority | Issue | Fix |
|---|---|---|
| P0 | Documentation out of sync: `man/expect_snapshot_data.Rd` not regenerated | `devtools::document()` + commit |
| P0 | 6 snapshot failures (attribute order) | `testthat::snapshot_accept()` + commit, OR reorder class assignment |
| P1 | `population_params` numeric diffs on Win/Linux | Investigate if intentional; update snapshots or add tolerance |
Agent-Logs-Url: https://github.com/ucdavis/serodynamics/sessions/77a47665-1ba9-4eb7-85be-af2d774068f5
Co-authored-by: Kwan-Jenny <68584166+Kwan-Jenny@users.noreply.github.com>
My Review

Hey @sschildhauer! Big step forward. I looked through the remaining CI failures on Windows and Ubuntu, and the good news is that they all come from the same root cause: the snapshot tests. Right now the snapshot tests save exact text output (including numeric values from JAGS) and compare it character-by-character. That works fine on macOS because you accepted the snapshots there, but Windows and Ubuntu produce slightly different floating-point numbers, so the character-level comparison fails.

My suggestion: instead of saving separate snapshots for each OS, let's make the tests platform-independent. We can check attribute names and structure exactly, and compare the numeric values with a tolerance.

Copilot Prompt (feel free to copy-paste this directly): if you want, you can hand this refactoring straight to Copilot.
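As a rough sketch of the tolerance-based approach (the data here are toy values invented for illustration, not output from this PR):

```r
library(testthat)

# Toy stand-ins for MCMC draws that differ only by floating-point noise,
# as JAGS output can across macOS, Windows, and Linux.
macos_draws <- c(1.2345678, 2.3456789, 3.4567891)
other_os_draws <- macos_draws + 1e-9

# A character-for-character snapshot would flag this difference, but a
# tolerance-based expectation treats the two vectors as equal:
expect_equal(other_os_draws, macos_draws, tolerance = 1e-6)

# Structure (attribute / column names) can still be checked exactly:
df <- data.frame(Iteration = 1:3, value = macos_draws)
expect_named(df, c("Iteration", "value"))
```

This keeps the structural checks strict while letting the numbers float within a small tolerance across platforms.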
```r
"Iteration" = NULL,
"Chain" = NULL,
"value" = NULL,
"Subject" = NULL,
"Parameter" = NULL,
"Iso_type" = NULL,
"Stratification" = NULL
```
jags_out is initialized with all columns set to NULL, which produces a tibble with zero columns. The first loop iteration then calls rbind(jags_out, jags_final), which will error due to mismatched column counts (and can also coerce types unexpectedly). Initialize jags_out with typed zero-length columns (e.g., integer(), character(), double()) or accumulate per-stratum results in a list and combine with dplyr::bind_rows() at the end (or use dplyr::bind_rows() each iteration starting from NULL).
```diff
- "Iteration" = NULL,
- "Chain" = NULL,
- "value" = NULL,
- "Subject" = NULL,
- "Parameter" = NULL,
- "Iso_type" = NULL,
- "Stratification" = NULL
+ "Iteration" = integer(),
+ "Chain" = integer(),
+ "value" = numeric(),
+ "Subject" = character(),
+ "Parameter" = character(),
+ "Iso_type" = character(),
+ "Stratification" = character()
```
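The list-and-`bind_rows()` alternative mentioned above can be sketched like this (the column set is abbreviated and the data invented for illustration):

```r
library(dplyr)

# Build one tibble per stratum in a list, then combine once at the end;
# this avoids rbind() against a zero-column tibble and repeated copying.
strata <- c("strat_a", "strat_b")
per_stratum <- vector("list", length(strata))

for (i in seq_along(strata)) {
  # stand-in for the per-stratum JAGS output
  per_stratum[[i]] <- tibble::tibble(
    Iteration = 1:2,
    Chain = 1L,
    value = rnorm(2),
    Stratification = strata[i]
  )
}

jags_out <- bind_rows(per_stratum)
nrow(jags_out)  # 4 rows (2 strata x 2 iterations), with typed columns
```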
```r
# Testing for population parameters
ref_summary <- readRDS(testthat::test_path(
  "ref_popparam_summary_stats.rds"))
attributes(results)$population_params |>
  dplyr::group_by(Parameter) |>
  dplyr::summarise(
    mean = mean(value),
    sd = sd(value),
    .groups = "drop"
  ) |>
  expect_equal(
    ref_summary,
    tolerance = 1e-3
  )
```
The population-parameter test compares summary stats to a single reference ref_popparam_summary_stats.rds. This is likely to be platform-dependent (the repo already uses darwin_variant() because JAGS MCMC output differs on macOS), so this expectation may be flaky across CI OSes. Consider switching to expect_snapshot_data(..., variant = darwin_variant()) for the summary output, or storing per-variant reference files and selecting the appropriate one at runtime.
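The per-variant reference-file option might look roughly like this (the file names and variant labels are illustrative, not taken from the repo):

```r
# Select a platform-specific reference file at runtime; the repo's own
# darwin_variant() helper could replace this ad hoc Sys.info() check.
variant <- if (Sys.info()[["sysname"]] == "Darwin") "darwin" else "other"
ref_file <- paste0("ref_popparam_summary_stats_", variant, ".rds")
# ref_summary <- readRDS(testthat::test_path(ref_file))
```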
```r
# Explicitly build the attribute list in the correct order to ensure
# that `class` appears at position [2] (after `names` and `row.names`).
```
The comment says this ensures class appears at position [2] “after names and row.names”, but if row.names is present then class is actually the 3rd attribute. Consider rewording the comment to avoid an off-by-one/ambiguous claim (e.g., “immediately after names and row.names”).
```diff
- # Explicitly build the attribute list in the correct order to ensure
- # that `class` appears at position [2] (after `names` and `row.names`).
+ # Explicitly build the attribute list in the correct order to ensure that
+ # `class` appears immediately after `names` and `row.names`.
```
```r
# Explicitly order attributes to ensure class is at position [2]
# (after names and row.names), which is the expected position for
# tibble subclasses. tibble::as_tibble() on a subclass can reorder
```
The comment says class should be at position [2] “after names and row.names”, but with row.names present class will be the 3rd attribute. Rewording this comment would make it clearer what ordering constraint is intended (e.g., “immediately after names and row.names”).
```diff
- # Explicitly order attributes to ensure class is at position [2]
- # (after names and row.names), which is the expected position for
- # tibble subclasses. tibble::as_tibble() on a subclass can reorder
+ # Explicitly order attributes so that class comes immediately after
+ # names and row.names, which is the expected position for tibble
+ # subclasses. tibble::as_tibble() on a subclass can reorder
```
```r
niter = 100,
strat = NA,
with_post = FALSE,
with_pop_params = TRUE,
...) {
```
with_pop_params defaults to TRUE, which adds mu.par, prec.par, and prec.logy to monitor. Monitoring these (especially prec.par) can substantially increase the size of the MCMC object returned by JAGS and slow down runs / increase memory use for all callers. Consider making this opt-in (default FALSE) or, if the default must remain TRUE, clearly documenting the performance/memory impact in the function docs and NEWS so users can disable it when they don’t need population parameters.
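An opt-in version could look roughly like this (the function name, base monitor set, and body are an illustrative sketch, not the package's actual code):

```r
# Sketch: make population-parameter monitoring opt-in so the extra
# mu.par / prec.par / prec.logy nodes are only tracked on request.
run_mod_sketch <- function(niter = 100,
                           strat = NA,
                           with_post = FALSE,
                           with_pop_params = FALSE,  # opt-in default
                           ...) {
  monitor <- c("halflife")  # illustrative base monitors
  if (with_pop_params) {
    monitor <- c(monitor, "mu.par", "prec.par", "prec.logy")
  }
  monitor
}

run_mod_sketch()                        # base monitors only
run_mod_sketch(with_pop_params = TRUE)  # adds population parameters
```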