From ec75f2a5a2d250e4e73c03723cd9b5f9b408ee58 Mon Sep 17 00:00:00 2001 From: jepusto Date: Sat, 20 Sep 2025 15:34:57 -0500 Subject: [PATCH 01/10] Fixed typo. --- 020-Data-generating-models.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/020-Data-generating-models.Rmd b/020-Data-generating-models.Rmd index 96f7fa1..2bbd962 100644 --- a/020-Data-generating-models.Rmd +++ b/020-Data-generating-models.Rmd @@ -238,7 +238,7 @@ Here is a plot of 30 observations from the bivariate Poisson distribution with m ```{r bivariate-Poisson-scatter} #| echo: false #| message: false -#| fig.cap: "$N = 30$ observations from the bivariate Poisson distribution with $\\mu_1 = 10, \\mu_2 = 7, \rho = .65$." +#| fig.cap: "$N = 30$ observations from the bivariate Poisson distribution with $\\mu_1 = 10, \\mu_2 = 7, \\rho = .65$." #| fig.width: 6 #| fig.height: 4 From 1177c31d071d533b24b0ad7a320087efd9d21623 Mon Sep 17 00:00:00 2001 From: jepusto Date: Mon, 13 Oct 2025 12:20:57 -0500 Subject: [PATCH 02/10] Fixed a typo on Chapter 020. --- 020-Data-generating-models.Rmd | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/020-Data-generating-models.Rmd b/020-Data-generating-models.Rmd index 2bbd962..68178ac 100644 --- a/020-Data-generating-models.Rmd +++ b/020-Data-generating-models.Rmd @@ -830,7 +830,7 @@ Another model for generating bivariate counts with negative binomial marginal di $$ \left(\begin{array}{c}Z_1 \\ Z_2 \end{array}\right) \sim N\left(\left[\begin{array}{c}0 \\ 0\end{array}\right], \ \left[\begin{array}{cc}1 & \rho \\ \rho & 1\end{array}\right]\right) $$ -Now find $U_1 = \Phi(Z_1)$ and $U_1 = \Phi(Z_1)$, where $\Phi()$ is the standard normal cumulative distribution function (called `pnorm()` in R). +Now find $U_1 = \Phi(Z_1)$ and $U_2 = \Phi(Z_2)$, where $\Phi()$ is the standard normal cumulative distribution function (called `pnorm()` in R). 
Then generate the counts by evaluating $U_1$ and $U_2$ with the negative binomial quantile function, $F_{NB}^{-1}(x | \mu, p)$ with mean parameter $\mu$ and size parameter $p$ (this function is called `qnbinom()` in R):
$$
C_1 = F_{NB}^{-1}(U_1 | \mu_1, p_1) \qquad C_2 = F_{NB}^{-1}(U_2 | \mu_2, p_2).

From a3fb6fff6fbf8b69889716ce804cbec650f5eefc Mon Sep 17 00:00:00 2001
From: jepusto
Date: Wed, 15 Oct 2025 08:13:04 -0500
Subject: [PATCH 03/10] Fixed bibtex so that the pdf compiles.

---
 001-introduction.Rmd | 4 +-
 020-Data-generating-models.Rmd | 2 +-
 072-presentation-of-results.Rmd | 4 +-
 080-simulations-as-evidence.Rmd | 6 +-
 Designing-Simulations-in-R.toc | 315 +++++++++---------
 .../clusterRCT_plot_bias_v1-1.pdf | Bin 7984 -> 7984 bytes
 .../clusterRCT_plot_bias_v2-1.pdf | Bin 8546 -> 8546 bytes
 .../figure-latex/disc_mde-1.pdf | Bin 10563 -> 10563 bytes
 .../figure-latex/disc_power-1.pdf | Bin 19481 -> 19481 bytes
 .../figure-latex/disc_precision-1.pdf | Bin 10111 -> 10111 bytes
 .../figure-latex/swan_example_setup-1.pdf | Bin 18905 -> 18905 bytes
 .../figure-latex/ttest_result_figure-1.pdf | Bin 5921 -> 5921 bytes
 .../figure-latex/unnamed-chunk-2-1.pdf | Bin 5342 -> 5336 bytes
 book.bib | 308 +++-------------
 sec | 0
 15 files changed, 217 insertions(+), 422 deletions(-)
 create mode 100644 sec

diff --git a/001-introduction.Rmd b/001-introduction.Rmd
index 28acb80..4fc3351 100644
--- a/001-introduction.Rmd
+++ b/001-introduction.Rmd
@@ -97,7 +97,7 @@ For example, this is of particular concern with hierarchical data structures tha
Simulation is a tractable approach for assessing the small-sample performance of such estimation methods or for determining minimum required sample sizes for adequate performance.
One example of a simulation investigating questions of finite-sample behavior comes from @longUsingHeteroscedasticityConsistent2000, who evaluated the performance of heteroskedasticity-robust standard errors (HRSE) in linear regression models.
-Asymptotic analysis indicates that HRSEs work well (in the sense of providing correct assessments of uncertainty) in sufficiently large samples (@White1980heteroskedasticity), but what about in realistic contexts where small samples occur? +Asymptotic analysis indicates that HRSEs work well (in the sense of providing correct assessments of uncertainty) in sufficiently large samples [@White1980heteroskedasticity], but what about in realistic contexts where small samples occur? @longUsingHeteroscedasticityConsistent2000 use extensive simulations to investigate the properties of different versions of HRSEs for linear regression across a range of sample sizes, demonstrating that the most commonly used form of these estimators often does _not_ work well with sample sizes found in typical social science applications. Via simulation, they provided compelling evidence about a problem without having to wade into a technical (and potentially inaccessible) mathematical analysis of the problem. @@ -161,7 +161,7 @@ Even this strategy has limitations, though. Except for very simple processes, we can seldom consider every possible set of conditions. As we will see in later chapters, the design of a simulation study typically entails making choices over very large spaces of possibility. -This flexibility leaves lots of room for discretion and judgement, and even for personal or professional biases [@boulesteix2020Replicationa]. +This flexibility leaves lots of room for discretion and judgement, and even for personal or professional biases [@boulesteix2020Replication]. Due to this flexibility, simulation findings are held in great skepticism by many. The following motto summarizes the skeptic's concern: diff --git a/020-Data-generating-models.Rmd b/020-Data-generating-models.Rmd index 68178ac..02897b1 100644 --- a/020-Data-generating-models.Rmd +++ b/020-Data-generating-models.Rmd @@ -310,7 +310,7 @@ Even simple checks such as these can be quite helpful in catching such bugs. 
Writing code for a complicated DGP can feel like a daunting task, but if you first focus on a recipe for how the data is generated, it is often not too bad to then convert that recipe into code.
We now illustrate this process with a detailed case study involving a more complex data-generating process.

-Recent literature on multisite trials (where, for example, students are randomized to treatment or control within each of a series of sites) has explored how variation in the strength of effects across sites can affect how different data-analysis procedures behave [e.g., @miratrix2021applied; @Bloom2016using].
In this example, we are going to extend this work to explore best practices for estimating treatment effects in cluster randomized trials.
In particular, we will investigate what happens when the treatment impact for each school is related to the size of the school.
diff --git a/072-presentation-of-results.Rmd b/072-presentation-of-results.Rmd
index fcf5345..a1e2ad6 100644
--- a/072-presentation-of-results.Rmd
+++ b/072-presentation-of-results.Rmd
@@ -568,8 +568,8 @@ Simulations are designed experiments, often with a full factorial structure.
The results are datasets in their own right, just as if we had collected data in the wild.
We can therefore leverage classic means for analyzing such full factorial experiments.
For example, we can regress a performance measure against our factor levels to get the "main effects" of how the different levels impact performance, holding the other levels constant.
-This type of regression is called a "meta regression" (@kleijnen1981regression, @friedman1988metamodel, @gilbert2024multilevel), as we are regressing on already processed results.
-It also has ties to meta analysis (see, e.g., @borenstein2021introduction), where we look for trends across sets of experiments.
+This type of regression is called a "meta regression" [@kleijnen1981regression; @friedman1988metamodel; @gilbert2024multilevel], as we are regressing on already processed results.
+It also has ties to meta analysis [see, e.g., @borenstein2021introduction], where we look for trends across sets of experiments.

In the language of a full factorial experiment, we might be interested in the "main effects" and the "interaction effects."
A main effect is whether, averaging across the other factors in our experiment, a factor of interest systematically impacts performance.
diff --git a/080-simulations-as-evidence.Rmd b/080-simulations-as-evidence.Rmd
index c85ca22..d840551 100644
--- a/080-simulations-as-evidence.Rmd
+++ b/080-simulations-as-evidence.Rmd
@@ -40,14 +40,14 @@ In the following subsections we go through a range of general strategies for mak

### Break symmetries and regularities

-In a series of famous causal inference papers (@lin2013agnostic, @freedman2008regression), researchers examined when linear regression adjustment of a randomized experiment (i.e., when controlling for baseline covariates in a randomized experiment) could cause problems.
+In a series of famous causal inference papers [@lin2013agnostic; @freedman2008regression], researchers examined when linear regression adjustment of a randomized experiment (i.e., when controlling for baseline covariates in a randomized experiment) could cause problems.
Critically, if the treatment assignment is 50%, then the concerns that these researchers examined do not come into play, as asymmetries between the two groups get perfectly cancelled out.
That said, if the treatment proportion is more lopsided, then under some circumstances you can get bias and invalid standard errors, depending on other structures of the data.
Simulations can be used to explore these issues, but only if we break the symmetry of the 50% treatment assignment.

When designing simulations, it is worth looking for places of symmetry, because in those contexts estimators will often work better than they might otherwise, and other factors may not have as much of an effect as anticipated.

-Similarly, in recent work on best practices for analyzing multisite experiments (@miratrix2021applied), we identified how different estimators could be targeting different estimands.
+Similarly, in recent work on best practices for analyzing multisite experiments [@miratrix2021applied], we identified how different estimators could be targeting different estimands.
In particular, some estimators target site-average treatment effects, some target person-average treatment effects, and some target a kind of precision-weighted blend of the two.
To see this play out in practice, our simulations needed the sizes of sites to vary, and also the proportion of treated within site to vary.
If we had run simulations with equal site size and equal proportion treated, we would not see the broader behavior that separates the estimators considered.
@@ -114,7 +114,7 @@ It is very easy to accidentally put a very simple model in place for this final
We next walk through how you might calibrate further in a causal inference context, where we are assessing methods of estimating the effect of a binary treatment.
If we just resample our covariates, but then layer a constant treatment effect on top, we may be missing critical aspects of how our estimators might fail in practice.
-In the area of causal inference, the potential outcomes framework provides a natural path for generating calibrated simulations [@Kern_calibrated]. +In the area of causal inference, the potential outcomes framework provides a natural path for generating calibrated simulations [@Kern2014calibrated]. Also see \@ref(potential-outcomes) for more discussion of simulations in the potential outcomes framework. Under this framework, we would take an existing randomized experiment or observational study and then impute all the missing potential outcomes under some specific scheme. This fully defines the sample of interest and thus any target parameters, such as a measure of heterogeneity, are then fully known. diff --git a/Designing-Simulations-in-R.toc b/Designing-Simulations-in-R.toc index 748da91..b7aeb65 100644 --- a/Designing-Simulations-in-R.toc +++ b/Designing-Simulations-in-R.toc @@ -51,6 +51,13 @@ \contentsline {section}{\numberline {5.3}Running the simulation}{71}{section.5.3}% \contentsline {section}{\numberline {5.4}Summarizing test performance}{72}{section.5.4}% \contentsline {section}{\numberline {5.5}Exercises}{74}{section.5.5}% +\contentsline {subsection}{\numberline {5.5.1}Other \(\alpha \)'s}{74}{subsection.5.5.1}% +\contentsline {subsection}{\numberline {5.5.2}Compare results}{74}{subsection.5.5.2}% +\contentsline {subsection}{\numberline {5.5.3}Power}{75}{subsection.5.5.3}% +\contentsline {subsection}{\numberline {5.5.4}Wide or long?}{75}{subsection.5.5.4}% +\contentsline {subsection}{\numberline {5.5.5}Other tests}{76}{subsection.5.5.5}% +\contentsline {subsection}{\numberline {5.5.6}Methodological extensions}{76}{subsection.5.5.6}% +\contentsline {subsection}{\numberline {5.5.7}Power analysis}{76}{subsection.5.5.7}% \contentsline {chapter}{\numberline {6}Data-generating processes}{77}{chapter.6}% \contentsline {section}{\numberline {6.1}Examples}{77}{section.6.1}% \contentsline {subsection}{\numberline {6.1.1}Example 1: One-way analysis of 
variance}{78}{subsection.6.1.1}% @@ -114,162 +121,162 @@ \contentsline {subsection}{\numberline {8.5.4}Fancy clustered RCT simulations}{153}{subsection.8.5.4}% \contentsline {chapter}{\numberline {9}Performance metrics}{155}{chapter.9}% \contentsline {section}{\numberline {9.1}Metrics for Point Estimators}{157}{section.9.1}% -\contentsline {subsection}{\numberline {9.1.1}Comparing the Performances of the Cluster RCT Estimation Procedures}{159}{subsection.9.1.1}% +\contentsline {subsection}{\numberline {9.1.1}Comparing the Performance of the Cluster RCT Estimation Procedures}{159}{subsection.9.1.1}% \contentsline {subsubsection}{Are the estimators biased?}{160}{section*.12}% \contentsline {subsubsection}{Which method has the smallest standard error?}{161}{section*.13}% \contentsline {subsubsection}{Which method has the smallest Root Mean Squared Error?}{161}{section*.14}% \contentsline {subsection}{\numberline {9.1.2}Less Conventional Performance metrics}{162}{subsection.9.1.2}% \contentsline {section}{\numberline {9.2}Metrics for Standard Error Estimators}{164}{section.9.2}% -\contentsline {subsection}{\numberline {9.2.1}Assessing SEs for Our Cluster RCT Simulation}{166}{subsection.9.2.1}% -\contentsline {section}{\numberline {9.3}Metrics for Confidence Intervals}{167}{section.9.3}% -\contentsline {subsection}{\numberline {9.3.1}Confidence Intervals in our Cluster RCT Example}{168}{subsection.9.3.1}% -\contentsline {section}{\numberline {9.4}Metrics for Inferential Procedures (Hypothesis Tests)}{168}{section.9.4}% -\contentsline {subsection}{\numberline {9.4.1}Validity}{168}{subsection.9.4.1}% -\contentsline {subsection}{\numberline {9.4.2}Power}{169}{subsection.9.4.2}% -\contentsline {subsection}{\numberline {9.4.3}The Rejection Rate}{169}{subsection.9.4.3}% -\contentsline {subsection}{\numberline {9.4.4}Inference in our Cluster RCT Simulation}{170}{subsection.9.4.4}% -\contentsline {section}{\numberline {9.5}Selecting Relative vs.~Absolute 
Metrics}{171}{section.9.5}% -\contentsline {section}{\numberline {9.6}Summary of Peformance Measures}{173}{section.9.6}% -\contentsline {subsection}{\numberline {9.6.1}Windsorization to control outliers}{179}{subsection.9.6.1}% -\contentsline {subsection}{\numberline {9.6.2}Correlation measures vs absolute performance}{181}{subsection.9.6.2}% -\contentsline {section}{\numberline {9.7}Summary of Peformance Measures}{183}{section.9.7}% -\contentsline {section}{\numberline {9.8}Estimands Not Represented By a Parameter}{184}{section.9.8}% -\contentsline {section}{\numberline {9.9}Uncertainty in Performance Estimates (the Monte Carlo Standard Error)}{187}{section.9.9}% -\contentsline {subsection}{\numberline {9.9.1}MCSE for Relative Variance Estimators}{188}{subsection.9.9.1}% -\contentsline {subsection}{\numberline {9.9.2}Calculating MCSEs With the \texttt {simhelpers} Package}{189}{subsection.9.9.2}% -\contentsline {subsection}{\numberline {9.9.3}MCSE Calculation in our Cluster RCT Example}{191}{subsection.9.9.3}% -\contentsline {section}{\numberline {9.10}Concluding thoughts}{191}{section.9.10}% -\contentsline {section}{\numberline {9.11}Exercises}{192}{section.9.11}% -\contentsline {subsection}{\numberline {9.11.1}Brown and Forsythe (1974)}{192}{subsection.9.11.1}% -\contentsline {subsection}{\numberline {9.11.2}Jackknife calculation of MCSEs}{192}{subsection.9.11.2}% -\contentsline {subsection}{\numberline {9.11.3}Distribution theory for person-level average treatment effects}{192}{subsection.9.11.3}% -\contentsline {subsection}{\numberline {9.11.4}Multiple scenarios}{192}{subsection.9.11.4}% -\contentsline {part}{III\hspace {1em}Multifactor Simulations}{195}{part.3}% -\contentsline {chapter}{\numberline {10}Designing and executing multifactor simulations}{197}{chapter.10}% -\contentsline {section}{\numberline {10.1}Choosing parameter combinations}{199}{section.10.1}% -\contentsline {section}{\numberline {10.2}Using pmap to run multifactor 
simulations}{201}{section.10.2}% -\contentsline {section}{\numberline {10.3}When to calculate performance metrics}{206}{section.10.3}% -\contentsline {subsection}{\numberline {10.3.1}Aggregate as you simulate (inside)}{206}{subsection.10.3.1}% -\contentsline {subsection}{\numberline {10.3.2}Keep all simulation runs (outside)}{206}{subsection.10.3.2}% -\contentsline {subsection}{\numberline {10.3.3}Getting raw results ready for analysis}{208}{subsection.10.3.3}% -\contentsline {section}{\numberline {10.4}Summary}{210}{section.10.4}% -\contentsline {section}{\numberline {10.5}Case Study: A multifactor evaluation of cluster RCT estimators}{211}{section.10.5}% -\contentsline {subsection}{\numberline {10.5.1}Choosing parameters for the Clustered RCT}{211}{subsection.10.5.1}% -\contentsline {subsection}{\numberline {10.5.2}Redundant factor combinations}{213}{subsection.10.5.2}% -\contentsline {subsection}{\numberline {10.5.3}Running the simulations}{213}{subsection.10.5.3}% -\contentsline {subsection}{\numberline {10.5.4}Calculating performance metrics}{214}{subsection.10.5.4}% -\contentsline {section}{\numberline {10.6}Exercises}{216}{section.10.6}% -\contentsline {subsection}{\numberline {10.6.1}Brown and Forsythe redux}{216}{subsection.10.6.1}% -\contentsline {subsection}{\numberline {10.6.2}Meta-regression}{216}{subsection.10.6.2}% -\contentsline {subsection}{\numberline {10.6.3}Comparing the trimmed mean, median and mean}{216}{subsection.10.6.3}% -\contentsline {chapter}{\numberline {11}Exploring and presenting simulation results}{219}{chapter.11}% -\contentsline {section}{\numberline {11.1}Tabulation}{220}{section.11.1}% -\contentsline {subsection}{\numberline {11.1.1}Example: estimators of treatment variation}{222}{subsection.11.1.1}% -\contentsline {section}{\numberline {11.2}Visualization}{223}{section.11.2}% -\contentsline {subsection}{\numberline {11.2.1}Example 0: RMSE in Cluster RCTs}{224}{subsection.11.2.1}% -\contentsline {subsection}{\numberline 
{11.2.2}Example 1: Biserial correlation estimation}{225}{subsection.11.2.2}% -\contentsline {subsection}{\numberline {11.2.3}Example 2: Variance estimation and Meta-regression}{226}{subsection.11.2.3}% -\contentsline {subsection}{\numberline {11.2.4}Example 3: Heat maps of coverage}{226}{subsection.11.2.4}% -\contentsline {subsection}{\numberline {11.2.5}Example 4: Relative performance of treatment effect estimators}{228}{subsection.11.2.5}% -\contentsline {section}{\numberline {11.3}Modeling}{229}{section.11.3}% -\contentsline {subsection}{\numberline {11.3.1}Example 1: Biserial, revisited}{230}{subsection.11.3.1}% -\contentsline {subsection}{\numberline {11.3.2}Example 2: Comparing methods for cross-classified data}{231}{subsection.11.3.2}% -\contentsline {section}{\numberline {11.4}Reporting}{233}{section.11.4}% -\contentsline {chapter}{\numberline {12}Building good visualizations}{235}{chapter.12}% -\contentsline {section}{\numberline {12.1}Subsetting and Many Small Multiples}{236}{section.12.1}% -\contentsline {section}{\numberline {12.2}Bundling}{239}{section.12.2}% -\contentsline {section}{\numberline {12.3}Aggregation}{242}{section.12.3}% -\contentsline {subsubsection}{\numberline {12.3.0.1}A note on how to aggregate}{244}{subsubsection.12.3.0.1}% -\contentsline {section}{\numberline {12.4}Assessing true SEs}{245}{section.12.4}% -\contentsline {subsubsection}{\numberline {12.4.0.1}Standardizing to compare across simulation scenarios}{247}{subsubsection.12.4.0.1}% -\contentsline {section}{\numberline {12.5}The Bias-SE-RMSE plot}{251}{section.12.5}% -\contentsline {section}{\numberline {12.6}Assessing estimated SEs}{252}{section.12.6}% -\contentsline {section}{\numberline {12.7}Assessing confidence intervals}{255}{section.12.7}% -\contentsline {section}{\numberline {12.8}Exercises}{258}{section.12.8}% -\contentsline {subsection}{\numberline {12.8.1}Assessing uncertainty}{258}{subsection.12.8.1}% -\contentsline {subsection}{\numberline {12.8.2}Assessing 
power}{258}{subsection.12.8.2}% -\contentsline {subsection}{\numberline {12.8.3}Going deeper with coverage}{258}{subsection.12.8.3}% -\contentsline {subsection}{\numberline {12.8.4}Pearson correlations with a bivariate Poisson distribution}{258}{subsection.12.8.4}% -\contentsline {chapter}{\numberline {13}Special Topics on Reporting Simulation Results}{259}{chapter.13}% -\contentsline {section}{\numberline {13.1}Using regression to analyze simulation results}{259}{section.13.1}% -\contentsline {subsection}{\numberline {13.1.1}Example 1: Biserial, revisited}{259}{subsection.13.1.1}% -\contentsline {subsection}{\numberline {13.1.2}Example 2: Cluster RCT example, revisited}{262}{subsection.13.1.2}% -\contentsline {subsubsection}{\numberline {13.1.2.1}Using LASSO to simplify the model}{264}{subsubsection.13.1.2.1}% -\contentsline {subsubsection}{\numberline {13.1.2.2}Fitting models to each method}{267}{subsubsection.13.1.2.2}% -\contentsline {section}{\numberline {13.2}Using regression trees to find important factors}{271}{section.13.2}% -\contentsline {section}{\numberline {13.3}Analyzing results with few iterations per scenario}{273}{section.13.3}% -\contentsline {subsection}{\numberline {13.3.1}Example: ClusterRCT with only 100 replicates per scenario}{274}{subsection.13.3.1}% -\contentsline {section}{\numberline {13.4}What to do with warnings in simulations}{278}{section.13.4}% -\contentsline {chapter}{\numberline {14}Case study: Comparing different estimators}{283}{chapter.14}% -\contentsline {section}{\numberline {14.1}Bias-variance tradeoffs}{286}{section.14.1}% -\contentsline {chapter}{\numberline {15}Simulations as evidence}{291}{chapter.15}% -\contentsline {section}{\numberline {15.1}Strategies for making relevant simulations}{292}{section.15.1}% -\contentsline {subsection}{\numberline {15.1.1}Break symmetries and regularities}{292}{subsection.15.1.1}% -\contentsline {subsection}{\numberline {15.1.2}Make your simulation general with an extensive multi-factor 
experiment}{293}{subsection.15.1.2}% -\contentsline {subsection}{\numberline {15.1.3}Use previously published simulations to beat them at their own game}{293}{subsection.15.1.3}% -\contentsline {subsection}{\numberline {15.1.4}Calibrate simulation factors to real data}{293}{subsection.15.1.4}% -\contentsline {subsection}{\numberline {15.1.5}Use real data to obtain directly}{294}{subsection.15.1.5}% -\contentsline {subsection}{\numberline {15.1.6}Fully calibrated simulations}{294}{subsection.15.1.6}% -\contentsline {part}{IV\hspace {1em}Computational Considerations}{297}{part.4}% -\contentsline {chapter}{\numberline {16}Organizing a simulation project}{299}{chapter.16}% -\contentsline {section}{\numberline {16.1}Well structured R scripts}{300}{section.16.1}% -\contentsline {subsection}{\numberline {16.1.1}The source command}{300}{subsection.16.1.1}% -\contentsline {subsection}{\numberline {16.1.2}Putting headers in your .R file}{301}{subsection.16.1.2}% -\contentsline {subsection}{\numberline {16.1.3}Storing testing code in your scripts}{302}{subsection.16.1.3}% -\contentsline {section}{\numberline {16.2}Principled directory structures}{302}{section.16.2}% -\contentsline {section}{\numberline {16.3}Saving simulation results}{303}{section.16.3}% -\contentsline {subsection}{\numberline {16.3.1}Saving simulations in general}{303}{subsection.16.3.1}% -\contentsline {subsection}{\numberline {16.3.2}Saving simulations as you go}{304}{subsection.16.3.2}% -\contentsline {subsection}{\numberline {16.3.3}Dynamically making directories}{307}{subsection.16.3.3}% -\contentsline {subsection}{\numberline {16.3.4}Loading and combining files of simulation results}{308}{subsection.16.3.4}% -\contentsline {chapter}{\numberline {17}Parallel Processing}{309}{chapter.17}% -\contentsline {section}{\numberline {17.1}Parallel on your computer}{310}{section.17.1}% -\contentsline {section}{\numberline {17.2}Parallel on a virtual machine}{311}{section.17.2}% -\contentsline 
{section}{\numberline {17.3}Parallel on a cluster}{312}{section.17.3}% -\contentsline {subsection}{\numberline {17.3.1}What is a command-line interface?}{312}{subsection.17.3.1}% -\contentsline {subsection}{\numberline {17.3.2}Running a job on a cluster}{314}{subsection.17.3.2}% -\contentsline {subsection}{\numberline {17.3.3}Checking on a job}{316}{subsection.17.3.3}% -\contentsline {subsection}{\numberline {17.3.4}Running lots of jobs on a cluster}{317}{subsection.17.3.4}% -\contentsline {subsection}{\numberline {17.3.5}Resources for Harvard's Odyssey}{319}{subsection.17.3.5}% -\contentsline {subsection}{\numberline {17.3.6}Acknowledgements}{320}{subsection.17.3.6}% -\contentsline {chapter}{\numberline {18}Debugging and Testing}{321}{chapter.18}% -\contentsline {section}{\numberline {18.1}Debugging with \texttt {print()}}{321}{section.18.1}% -\contentsline {section}{\numberline {18.2}Debugging with \texttt {browser()}}{322}{section.18.2}% -\contentsline {section}{\numberline {18.3}Debugging with \texttt {debug()}}{323}{section.18.3}% -\contentsline {section}{\numberline {18.4}Protecting functions with \texttt {stop()}}{323}{section.18.4}% -\contentsline {section}{\numberline {18.5}Testing code}{325}{section.18.5}% -\contentsline {part}{V\hspace {1em}Complex Data Structures}{329}{part.5}% -\contentsline {chapter}{\numberline {19}Using simulation as a power calculator}{331}{chapter.19}% -\contentsline {section}{\numberline {19.1}Getting design parameters from pilot data}{332}{section.19.1}% -\contentsline {section}{\numberline {19.2}The data generating process}{333}{section.19.2}% -\contentsline {section}{\numberline {19.3}Running the simulation}{337}{section.19.3}% -\contentsline {section}{\numberline {19.4}Evaluating power}{338}{section.19.4}% -\contentsline {subsection}{\numberline {19.4.1}Checking validity of our models}{338}{subsection.19.4.1}% -\contentsline {subsection}{\numberline {19.4.2}Assessing Precision (SE)}{341}{subsection.19.4.2}% -\contentsline 
{subsection}{\numberline {19.4.3}Assessing power}{341}{subsection.19.4.3}% -\contentsline {subsection}{\numberline {19.4.4}Assessing Minimum Detectable Effects}{342}{subsection.19.4.4}% -\contentsline {section}{\numberline {19.5}Power for Multilevel Data}{343}{section.19.5}% -\contentsline {chapter}{\numberline {20}Simulation under the Potential Outcomes Framework}{347}{chapter.20}% -\contentsline {section}{\numberline {20.1}Finite vs.~Superpopulation inference}{348}{section.20.1}% -\contentsline {section}{\numberline {20.2}Data generation processes for potential outcomes}{348}{section.20.2}% -\contentsline {section}{\numberline {20.3}Finite sample performance measures}{351}{section.20.3}% -\contentsline {section}{\numberline {20.4}Nested finite simulation procedure}{354}{section.20.4}% -\contentsline {chapter}{\numberline {21}The Parametric bootstrap}{359}{chapter.21}% -\contentsline {section}{\numberline {21.1}Air conditioners: a stolen case study}{360}{section.21.1}% -\contentsline {chapter}{\numberline {A}Coding Reference}{363}{appendix.A}% -\contentsline {section}{\numberline {A.1}How to repeat yourself}{363}{section.A.1}% -\contentsline {subsection}{\numberline {A.1.1}Using \texttt {replicate()}}{363}{subsection.A.1.1}% -\contentsline {subsection}{\numberline {A.1.2}Using \texttt {map()}}{365}{subsection.A.1.2}% -\contentsline {subsection}{\numberline {A.1.3}map with no inputs}{366}{subsection.A.1.3}% -\contentsline {subsection}{\numberline {A.1.4}Other approaches for repetition}{367}{subsection.A.1.4}% -\contentsline {section}{\numberline {A.2}Default arguments for functions}{367}{section.A.2}% -\contentsline {section}{\numberline {A.3}Profiling Code}{369}{section.A.3}% -\contentsline {subsection}{\numberline {A.3.1}Using \texttt {Sys.time()} and \texttt {system.time()}}{369}{subsection.A.3.1}% -\contentsline {subsection}{\numberline {A.3.2}The \texttt {tictoc} package}{370}{subsection.A.3.2}% -\contentsline {subsection}{\numberline {A.3.3}The \texttt 
{bench} package}{370}{subsection.A.3.3}% -\contentsline {subsection}{\numberline {A.3.4}Profiling with \texttt {profvis}}{373}{subsection.A.3.4}% -\contentsline {section}{\numberline {A.4}Optimizing code (and why you often shouldn't)}{373}{section.A.4}% -\contentsline {subsection}{\numberline {A.4.1}Hand-building functions}{374}{subsection.A.4.1}% -\contentsline {subsection}{\numberline {A.4.2}Computational efficiency versus simplicity}{375}{subsection.A.4.2}% -\contentsline {subsection}{\numberline {A.4.3}Reusing code to speed up computation}{376}{subsection.A.4.3}% -\contentsline {chapter}{\numberline {B}Further readings and resources}{383}{appendix.B}% +\contentsline {subsection}{\numberline {9.2.1}Satterthwaite degrees of freedom}{166}{subsection.9.2.1}% +\contentsline {subsection}{\numberline {9.2.2}Assessing SEs for the Cluster RCT Simulation}{167}{subsection.9.2.2}% +\contentsline {section}{\numberline {9.3}Metrics for Confidence Intervals}{168}{section.9.3}% +\contentsline {subsection}{\numberline {9.3.1}Confidence Intervals in the Cluster RCT Simulation}{169}{subsection.9.3.1}% +\contentsline {section}{\numberline {9.4}Metrics for Inferential Procedures (Hypothesis Tests)}{170}{section.9.4}% +\contentsline {subsection}{\numberline {9.4.1}Validity}{171}{subsection.9.4.1}% +\contentsline {subsection}{\numberline {9.4.2}Power}{172}{subsection.9.4.2}% +\contentsline {subsection}{\numberline {9.4.3}The Rejection Rate}{172}{subsection.9.4.3}% +\contentsline {subsection}{\numberline {9.4.4}Inference in the Cluster RCT Simulation}{173}{subsection.9.4.4}% +\contentsline {section}{\numberline {9.5}Selecting Relative vs.~Absolute Metrics}{175}{section.9.5}% +\contentsline {section}{\numberline {9.6}Estimands Not Represented By a Parameter}{177}{section.9.6}% +\contentsline {section}{\numberline {9.7}Uncertainty in Performance Estimates (the Monte Carlo Standard Error)}{179}{section.9.7}% +\contentsline {subsection}{\numberline {9.7.1}MCSE for Relative Variance 
Estimators}{181}{subsection.9.7.1}% +\contentsline {subsection}{\numberline {9.7.2}Calculating MCSEs With the \texttt {simhelpers} Package}{182}{subsection.9.7.2}% +\contentsline {subsection}{\numberline {9.7.3}MCSE Calculation in our Cluster RCT Example}{183}{subsection.9.7.3}% +\contentsline {section}{\numberline {9.8}Summary of Peformance Measures}{184}{section.9.8}% +\contentsline {section}{\numberline {9.9}Concluding thoughts}{185}{section.9.9}% +\contentsline {section}{\numberline {9.10}Exercises}{185}{section.9.10}% +\contentsline {subsection}{\numberline {9.10.1}Brown and Forsythe (1974)}{185}{subsection.9.10.1}% +\contentsline {subsection}{\numberline {9.10.2}Better confidence intervals}{185}{subsection.9.10.2}% +\contentsline {subsection}{\numberline {9.10.3}Cluster RCT simulation under a strong null hypothesis}{186}{subsection.9.10.3}% +\contentsline {subsection}{\numberline {9.10.4}Jackknife calculation of MCSEs}{186}{subsection.9.10.4}% +\contentsline {subsection}{\numberline {9.10.5}Distribution theory for person-level average treatment effects}{186}{subsection.9.10.5}% +\contentsline {subsection}{\numberline {9.10.6}Multiple scenarios}{186}{subsection.9.10.6}% +\contentsline {part}{III\hspace {1em}Multifactor Simulations}{189}{part.3}% +\contentsline {chapter}{\numberline {10}Designing and executing multifactor simulations}{191}{chapter.10}% +\contentsline {section}{\numberline {10.1}Choosing parameter combinations}{193}{section.10.1}% +\contentsline {section}{\numberline {10.2}Using pmap to run multifactor simulations}{195}{section.10.2}% +\contentsline {section}{\numberline {10.3}When to calculate performance metrics}{200}{section.10.3}% +\contentsline {subsection}{\numberline {10.3.1}Aggregate as you simulate (inside)}{200}{subsection.10.3.1}% +\contentsline {subsection}{\numberline {10.3.2}Keep all simulation runs (outside)}{200}{subsection.10.3.2}% +\contentsline {subsection}{\numberline {10.3.3}Getting raw results ready for 
analysis}{202}{subsection.10.3.3}% +\contentsline {section}{\numberline {10.4}Summary}{204}{section.10.4}% +\contentsline {section}{\numberline {10.5}Case Study: A multifactor evaluation of cluster RCT estimators}{205}{section.10.5}% +\contentsline {subsection}{\numberline {10.5.1}Choosing parameters for the Clustered RCT}{205}{subsection.10.5.1}% +\contentsline {subsection}{\numberline {10.5.2}Redundant factor combinations}{207}{subsection.10.5.2}% +\contentsline {subsection}{\numberline {10.5.3}Running the simulations}{207}{subsection.10.5.3}% +\contentsline {subsection}{\numberline {10.5.4}Calculating performance metrics}{208}{subsection.10.5.4}% +\contentsline {section}{\numberline {10.6}Exercises}{210}{section.10.6}% +\contentsline {subsection}{\numberline {10.6.1}Brown and Forsythe redux}{210}{subsection.10.6.1}% +\contentsline {subsection}{\numberline {10.6.2}Meta-regression}{210}{subsection.10.6.2}% +\contentsline {subsection}{\numberline {10.6.3}Comparing the trimmed mean, median and mean}{210}{subsection.10.6.3}% +\contentsline {chapter}{\numberline {11}Exploring and presenting simulation results}{213}{chapter.11}% +\contentsline {section}{\numberline {11.1}Tabulation}{214}{section.11.1}% +\contentsline {subsection}{\numberline {11.1.1}Example: estimators of treatment variation}{216}{subsection.11.1.1}% +\contentsline {section}{\numberline {11.2}Visualization}{217}{section.11.2}% +\contentsline {subsection}{\numberline {11.2.1}Example 0: RMSE in Cluster RCTs}{218}{subsection.11.2.1}% +\contentsline {subsection}{\numberline {11.2.2}Example 1: Biserial correlation estimation}{219}{subsection.11.2.2}% +\contentsline {subsection}{\numberline {11.2.3}Example 2: Variance estimation and Meta-regression}{220}{subsection.11.2.3}% +\contentsline {subsection}{\numberline {11.2.4}Example 3: Heat maps of coverage}{220}{subsection.11.2.4}% +\contentsline {subsection}{\numberline {11.2.5}Example 4: Relative performance of treatment effect 
estimators}{222}{subsection.11.2.5}% +\contentsline {section}{\numberline {11.3}Modeling}{223}{section.11.3}% +\contentsline {subsection}{\numberline {11.3.1}Example 1: Biserial, revisited}{224}{subsection.11.3.1}% +\contentsline {subsection}{\numberline {11.3.2}Example 2: Comparing methods for cross-classified data}{225}{subsection.11.3.2}% +\contentsline {section}{\numberline {11.4}Reporting}{227}{section.11.4}% +\contentsline {chapter}{\numberline {12}Building good visualizations}{229}{chapter.12}% +\contentsline {section}{\numberline {12.1}Subsetting and Many Small Multiples}{230}{section.12.1}% +\contentsline {section}{\numberline {12.2}Bundling}{233}{section.12.2}% +\contentsline {section}{\numberline {12.3}Aggregation}{236}{section.12.3}% +\contentsline {subsubsection}{\numberline {12.3.0.1}A note on how to aggregate}{238}{subsubsection.12.3.0.1}% +\contentsline {section}{\numberline {12.4}Assessing true SEs}{239}{section.12.4}% +\contentsline {subsubsection}{\numberline {12.4.0.1}Standardizing to compare across simulation scenarios}{241}{subsubsection.12.4.0.1}% +\contentsline {section}{\numberline {12.5}The Bias-SE-RMSE plot}{245}{section.12.5}% +\contentsline {section}{\numberline {12.6}Assessing estimated SEs}{246}{section.12.6}% +\contentsline {section}{\numberline {12.7}Assessing confidence intervals}{249}{section.12.7}% +\contentsline {section}{\numberline {12.8}Exercises}{252}{section.12.8}% +\contentsline {subsection}{\numberline {12.8.1}Assessing uncertainty}{252}{subsection.12.8.1}% +\contentsline {subsection}{\numberline {12.8.2}Assessing power}{252}{subsection.12.8.2}% +\contentsline {subsection}{\numberline {12.8.3}Going deeper with coverage}{252}{subsection.12.8.3}% +\contentsline {subsection}{\numberline {12.8.4}Pearson correlations with a bivariate Poisson distribution}{252}{subsection.12.8.4}% +\contentsline {chapter}{\numberline {13}Special Topics on Reporting Simulation Results}{253}{chapter.13}% +\contentsline {section}{\numberline 
{13.1}Using regression to analyze simulation results}{253}{section.13.1}% +\contentsline {subsection}{\numberline {13.1.1}Example 1: Biserial, revisited}{253}{subsection.13.1.1}% +\contentsline {subsection}{\numberline {13.1.2}Example 2: Cluster RCT example, revisited}{256}{subsection.13.1.2}% +\contentsline {subsubsection}{\numberline {13.1.2.1}Using LASSO to simplify the model}{258}{subsubsection.13.1.2.1}% +\contentsline {subsubsection}{\numberline {13.1.2.2}Fitting models to each method}{261}{subsubsection.13.1.2.2}% +\contentsline {section}{\numberline {13.2}Using regression trees to find important factors}{265}{section.13.2}% +\contentsline {section}{\numberline {13.3}Analyzing results with few iterations per scenario}{267}{section.13.3}% +\contentsline {subsection}{\numberline {13.3.1}Example: ClusterRCT with only 100 replicates per scenario}{268}{subsection.13.3.1}% +\contentsline {section}{\numberline {13.4}What to do with warnings in simulations}{272}{section.13.4}% +\contentsline {chapter}{\numberline {14}Case study: Comparing different estimators}{277}{chapter.14}% +\contentsline {section}{\numberline {14.1}Bias-variance tradeoffs}{280}{section.14.1}% +\contentsline {chapter}{\numberline {15}Simulations as evidence}{285}{chapter.15}% +\contentsline {section}{\numberline {15.1}Strategies for making relevant simulations}{286}{section.15.1}% +\contentsline {subsection}{\numberline {15.1.1}Break symmetries and regularities}{286}{subsection.15.1.1}% +\contentsline {subsection}{\numberline {15.1.2}Make your simulation general with an extensive multi-factor experiment}{287}{subsection.15.1.2}% +\contentsline {subsection}{\numberline {15.1.3}Use previously published simulations to beat them at their own game}{287}{subsection.15.1.3}% +\contentsline {subsection}{\numberline {15.1.4}Calibrate simulation factors to real data}{287}{subsection.15.1.4}% +\contentsline {subsection}{\numberline {15.1.5}Use real data to obtain directly}{288}{subsection.15.1.5}% 
+\contentsline {subsection}{\numberline {15.1.6}Fully calibrated simulations}{288}{subsection.15.1.6}% +\contentsline {part}{IV\hspace {1em}Computational Considerations}{291}{part.4}% +\contentsline {chapter}{\numberline {16}Organizing a simulation project}{293}{chapter.16}% +\contentsline {section}{\numberline {16.1}Well structured R scripts}{294}{section.16.1}% +\contentsline {subsection}{\numberline {16.1.1}The source command}{294}{subsection.16.1.1}% +\contentsline {subsection}{\numberline {16.1.2}Putting headers in your .R file}{295}{subsection.16.1.2}% +\contentsline {subsection}{\numberline {16.1.3}Storing testing code in your scripts}{296}{subsection.16.1.3}% +\contentsline {section}{\numberline {16.2}Principled directory structures}{296}{section.16.2}% +\contentsline {section}{\numberline {16.3}Saving simulation results}{297}{section.16.3}% +\contentsline {subsection}{\numberline {16.3.1}Saving simulations in general}{297}{subsection.16.3.1}% +\contentsline {subsection}{\numberline {16.3.2}Saving simulations as you go}{298}{subsection.16.3.2}% +\contentsline {subsection}{\numberline {16.3.3}Dynamically making directories}{301}{subsection.16.3.3}% +\contentsline {subsection}{\numberline {16.3.4}Loading and combining files of simulation results}{302}{subsection.16.3.4}% +\contentsline {chapter}{\numberline {17}Parallel Processing}{303}{chapter.17}% +\contentsline {section}{\numberline {17.1}Parallel on your computer}{304}{section.17.1}% +\contentsline {section}{\numberline {17.2}Parallel on a virtual machine}{305}{section.17.2}% +\contentsline {section}{\numberline {17.3}Parallel on a cluster}{306}{section.17.3}% +\contentsline {subsection}{\numberline {17.3.1}What is a command-line interface?}{306}{subsection.17.3.1}% +\contentsline {subsection}{\numberline {17.3.2}Running a job on a cluster}{308}{subsection.17.3.2}% +\contentsline {subsection}{\numberline {17.3.3}Checking on a job}{310}{subsection.17.3.3}% +\contentsline {subsection}{\numberline 
{17.3.4}Running lots of jobs on a cluster}{311}{subsection.17.3.4}% +\contentsline {subsection}{\numberline {17.3.5}Resources for Harvard's Odyssey}{313}{subsection.17.3.5}% +\contentsline {subsection}{\numberline {17.3.6}Acknowledgements}{314}{subsection.17.3.6}% +\contentsline {chapter}{\numberline {18}Debugging and Testing}{315}{chapter.18}% +\contentsline {section}{\numberline {18.1}Debugging with \texttt {print()}}{315}{section.18.1}% +\contentsline {section}{\numberline {18.2}Debugging with \texttt {browser()}}{316}{section.18.2}% +\contentsline {section}{\numberline {18.3}Debugging with \texttt {debug()}}{317}{section.18.3}% +\contentsline {section}{\numberline {18.4}Protecting functions with \texttt {stop()}}{317}{section.18.4}% +\contentsline {section}{\numberline {18.5}Testing code}{319}{section.18.5}% +\contentsline {part}{V\hspace {1em}Complex Data Structures}{323}{part.5}% +\contentsline {chapter}{\numberline {19}Using simulation as a power calculator}{325}{chapter.19}% +\contentsline {section}{\numberline {19.1}Getting design parameters from pilot data}{326}{section.19.1}% +\contentsline {section}{\numberline {19.2}The data generating process}{327}{section.19.2}% +\contentsline {section}{\numberline {19.3}Running the simulation}{331}{section.19.3}% +\contentsline {section}{\numberline {19.4}Evaluating power}{332}{section.19.4}% +\contentsline {subsection}{\numberline {19.4.1}Checking validity of our models}{332}{subsection.19.4.1}% +\contentsline {subsection}{\numberline {19.4.2}Assessing Precision (SE)}{335}{subsection.19.4.2}% +\contentsline {subsection}{\numberline {19.4.3}Assessing power}{335}{subsection.19.4.3}% +\contentsline {subsection}{\numberline {19.4.4}Assessing Minimum Detectable Effects}{336}{subsection.19.4.4}% +\contentsline {section}{\numberline {19.5}Power for Multilevel Data}{337}{section.19.5}% +\contentsline {chapter}{\numberline {20}Simulation under the Potential Outcomes Framework}{341}{chapter.20}% +\contentsline 
{section}{\numberline {20.1}Finite vs.~Superpopulation inference}{342}{section.20.1}% +\contentsline {section}{\numberline {20.2}Data generation processes for potential outcomes}{342}{section.20.2}% +\contentsline {section}{\numberline {20.3}Finite sample performance measures}{345}{section.20.3}% +\contentsline {section}{\numberline {20.4}Nested finite simulation procedure}{348}{section.20.4}% +\contentsline {chapter}{\numberline {21}The Parametric bootstrap}{353}{chapter.21}% +\contentsline {section}{\numberline {21.1}Air conditioners: a stolen case study}{354}{section.21.1}% +\contentsline {chapter}{\numberline {A}Coding Reference}{357}{appendix.A}% +\contentsline {section}{\numberline {A.1}How to repeat yourself}{357}{section.A.1}% +\contentsline {subsection}{\numberline {A.1.1}Using \texttt {replicate()}}{357}{subsection.A.1.1}% +\contentsline {subsection}{\numberline {A.1.2}Using \texttt {map()}}{359}{subsection.A.1.2}% +\contentsline {subsection}{\numberline {A.1.3}map with no inputs}{360}{subsection.A.1.3}% +\contentsline {subsection}{\numberline {A.1.4}Other approaches for repetition}{361}{subsection.A.1.4}% +\contentsline {section}{\numberline {A.2}Default arguments for functions}{361}{section.A.2}% +\contentsline {section}{\numberline {A.3}Profiling Code}{363}{section.A.3}% +\contentsline {subsection}{\numberline {A.3.1}Using \texttt {Sys.time()} and \texttt {system.time()}}{363}{subsection.A.3.1}% +\contentsline {subsection}{\numberline {A.3.2}The \texttt {tictoc} package}{364}{subsection.A.3.2}% +\contentsline {subsection}{\numberline {A.3.3}The \texttt {bench} package}{364}{subsection.A.3.3}% +\contentsline {subsection}{\numberline {A.3.4}Profiling with \texttt {profvis}}{367}{subsection.A.3.4}% +\contentsline {section}{\numberline {A.4}Optimizing code (and why you often shouldn't)}{367}{section.A.4}% +\contentsline {subsection}{\numberline {A.4.1}Hand-building functions}{368}{subsection.A.4.1}% +\contentsline {subsection}{\numberline 
{A.4.2}Computational efficiency versus simplicity}{369}{subsection.A.4.2}% +\contentsline {subsection}{\numberline {A.4.3}Reusing code to speed up computation}{370}{subsection.A.4.3}% +\contentsline {chapter}{\numberline {B}Further readings and resources}{377}{appendix.B}%
diff --git a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf
index 980c42e39e23afa3dc104a79741f3d337a6aafdd..82d64a77a0e5858b165fa607505c8c9a28b85e1d 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf
index 829ae74e92d6928866d451f429b213a8036cdd0c..ceb82892d2923040dbdb622ec8ac816000135124 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/disc_mde-1.pdf b/Designing-Simulations-in-R_files/figure-latex/disc_mde-1.pdf
index dc6af8690530da3e8f21873115e0bba607e53bf6..4c7ceeb42c97d8327db1edfc612bfa71aa1a93c5 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/disc_mde-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/disc_mde-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/disc_power-1.pdf b/Designing-Simulations-in-R_files/figure-latex/disc_power-1.pdf
index b53c2a530314efc9634926c2d4c822fa997c732a..6435d6076b9aadfdb3e348d6ef936af90193bdf7 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/disc_power-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/disc_power-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf b/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf
Binary files a/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf b/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf
index 55df1b5fb289ae5610f1233875f24ba3ecf269ae..0d4ea993c6249c3452d35cbb28c07ad25933599d 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf b/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf
index ec5098133e1fca721544925722573bd120b7d3a4..76e860d5a450023bcf93d549b40b16d2c41fd94e 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf differ
diff --git a/Designing-Simulations-in-R_files/figure-latex/unnamed-chunk-2-1.pdf b/Designing-Simulations-in-R_files/figure-latex/unnamed-chunk-2-1.pdf
index 2bbcce8e48fe2740e885696d9ac05a22fc4dad88..752216c1907134539f7460e71c0f26ba63f66215 100644
Binary files a/Designing-Simulations-in-R_files/figure-latex/unnamed-chunk-2-1.pdf and b/Designing-Simulations-in-R_files/figure-latex/unnamed-chunk-2-1.pdf differ
diff --git a/book.bib b/book.bib
 abstract = {{$<$}em{$>$}Gale{$<$}/em{$>$} Academic OneFile includes The statistical crisis in science: data-dependent analy by Andrew Gelman and Eric Loken. Click to explore.},
 author = {Gelman, Andrew and Loken, Eric},
- file = {C:\Users\jamespustejovsky\Zotero\storage\89I3TV76\i.html},
- issn = {00030996},
 journal = {American Scientist},
 langid = {english},
 month = nov,
 number = {6},
 pages = {460--466},
 publisher = {Sigma Xi, The Scientific Research Society},
- shorttitle = {The Statistical Crisis in Science},
 title = {The Statistical Crisis in Science: Data-Dependent Analysis--a \"Garden of Forking Paths\"--Explains Why Many Statistically Significant Comparisons Don't Hold Up},
- urldate = {2024-01-08},
 volume = {102},
 year = {2014}}
@@ -983,29 +854,21 @@ @article{goldfeld2020SimstudyIlluminatingResearch
 @article{green2016SIMRPackagePower,
 abstract = {The r package simr allows users to calculate power for generalized linear mixed models from the lme4 package. The power calculations are based on Monte Carlo simulations. It includes tools for (i) running a power analysis for a given model and design; and (ii) calculating power curves to assess trade-offs between power and sample size. This paper presents a tutorial using a simple example of count data with mixed effects (with structure representative of environmental monitoring data) to guide the user along a gentle learning curve, adding only a few commands or options at a time.},
 author = {Green, Peter and MacLeod, Catriona J.},
- copyright = {{\copyright} 2015 The Authors.
Methods in Ecology and Evolution {\copyright} 2015 British Ecological Society}, doi = {10.1111/2041-210X.12504}, - file = {C\:\\Users\\jamespustejovsky\\Zotero\\storage\\QDLKYJ6L\\Green and MacLeod - 2016 - SIMR an R package for power analysis of generaliz.pdf;C\:\\Users\\jamespustejovsky\\Zotero\\storage\\HEIG34AA\\2041-210X.html}, - issn = {2041-210X}, journal = {Methods in Ecology and Evolution}, keywords = {cited,experimental design,glmm,Monte Carlo,random effects,sample size,type II error}, langid = {english}, number = {4}, pages = {493--498}, - shorttitle = {{{SIMR}}}, title = {{{SIMR}}: An {{R}} Package for Power Analysis of Generalized Linear Mixed Models by Simulation}, - urldate = {2023-12-31}, volume = {7}, year = {2016}, - bdsk-url-1 = {https://doi.org/10.1111/2041-210X.12504}} +} @article{hardwicke2023ReducingBiasIncreasing, abstract = {Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. 
In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.}, author = {Hardwicke, Tom E. and Wagenmakers, Eric-Jan}, - copyright = {2022 Springer Nature Limited}, doi = {10.1038/s41562-022-01497-2}, - file = {C:\Users\jamespustejovsky\Zotero\storage\SKQY4R7Q\Hardwicke and Wagenmakers - 2023 - Reducing bias, increasing transparency and calibra.pdf}, - issn = {2397-3374}, journal = {Nature Human Behaviour}, keywords = {Science,Scientific community,technology and society}, langid = {english}, @@ -1014,17 +877,14 @@ @article{hardwicke2023ReducingBiasIncreasing pages = {15--26}, publisher = {Nature Publishing Group}, title = {Reducing Bias, Increasing Transparency and Calibrating Confidence with Preregistration}, - urldate = {2024-01-08}, volume = {7}, year = {2023}, - bdsk-url-1 = {https://doi.org/10.1038/s41562-022-01497-2}} +} @article{harwell2018SurveyReportingPractices, abstract = {Computer simulation studies represent an important tool for investigating processes difficult or impossible to study using mathematical theory or real data. Hoaglin and Andrews recommended these studies be treated as statistical sampling experiments subject to established principles of design and data analysis, but the survey of Hauck and Anderson suggested these recommendations had, at that point in time, generally been ignored. We update the survey results of Hauck and Anderson using a sample of studies applying simulation methods in statistical research to assess the extent to which the recommendations of Hoaglin and Andrews and others for conducting simulation studies have been adopted. The important role of statistical applications of computer simulation studies in enhancing the reproducibility of scientific findings is also discussed. 
The results speak to the state of the art and the extent to which these studies are realizing their potential to inform statistical practice and a program of statistical research.}, author = {Harwell, Michael and Kohli, Nidhi and {Peralta-Torres}, Yadira}, doi = {10.1080/00031305.2017.1342692}, - file = {C:\Users\jamespustejovsky\Zotero\storage\H9BDZ3WT\Harwell et al. - 2018 - A Survey of Reporting Practices of Computer Simula.pdf}, - issn = {0003-1305}, journal = {The American Statistician}, keywords = {Computer simulation,Design and data analysis,Survey}, month = oct, @@ -1032,35 +892,29 @@ @article{harwell2018SurveyReportingPractices pages = {321--327}, publisher = {Taylor \& Francis}, title = {A {{Survey}} of {{Reporting Practices}} of {{Computer Simulation Studies}} in {{Statistical Research}}}, - urldate = {2024-01-02}, volume = {72}, year = {2018}, - bdsk-url-1 = {https://doi.org/10.1080/00031305.2017.1342692}} +} @article{hoogland1998RobustnessStudiesCovariance, abstract = {In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the conclusions frequently seem to contradict each other. An overview of robustness studies in covariance structure analysis is given, and an attempt is made to generalize findings. Robustness studies are described and distinguished from each other systematically by means of certain characteristics. These characteristics serve as explanatory variables in a meta-analysis concerning the behavior of parameter estimators, standard error estimators, and goodness-of-fit statistics when the model is correctly specified.}, author = {HOOGLAND, JEFFREY J. 
and BOOMSMA, {\relax ANNE}}, doi = {10.1177/0049124198026003003}, - file = {C:\Users\jamespustejovsky\Zotero\storage\D23SYWER\HOOGLAND and BOOMSMA - 1998 - Robustness Studies in Covariance Structure Modelin.pdf}, - issn = {0049-1241}, journal = {Sociological Methods \& Research}, langid = {english}, month = feb, number = {3}, pages = {329--367}, publisher = {SAGE Publications Inc}, - shorttitle = {Robustness {{Studies}} in {{Covariance Structure Modeling}}}, title = {Robustness {{Studies}} in {{Covariance Structure Modeling}}: {{An Overview}} and a {{Meta-Analysis}}}, - urldate = {2024-01-02}, volume = {26}, year = {1998}, - bdsk-url-1 = {https://doi.org/10.1177/0049124198026003003}} +} @article{huang2016GeneralizedEstimatingEquations, abstract = {Background/aims: Generalized estimating equations are a common modeling approach used in cluster randomized trials to account for within-cluster correlation. It is well known that the sandwich variance estimator is biased when the number of clusters is small ({$\leq$}40), resulting in an inflated type I error rate. Various bias correction methods have been proposed in the statistical literature, but how adequately they are utilized in current practice for cluster randomized trials is not clear. The aim of this study is to evaluate the use of generalized estimating equation bias correction methods in recently published cluster randomized trials and demonstrate the necessity of such methods when the number of clusters is small. Methods: Review of cluster randomized trials published between August 2013 and July 2014 and using generalized estimating equations for their primary analyses. Two independent reviewers collected data from each study using a standardized, pre-piloted data extraction template. A two-arm cluster randomized trial was simulated under various scenarios to show the potential effect of a small number of clusters on type I error rate when estimating the treatment effect. 
The nominal level was set at 0.05 for the simulation study. Results: Of the 51 included trials, 28 (54.9\%) analyzed 40 or fewer clusters with a minimum of four total clusters. Of these 28 trials, only one trial used a bias correction method for generalized estimating equations. The simulation study showed that with four clusters, the type I error rate ranged between 0.43 and 0.47. Even though type I error rate moved closer to the nominal level as the number of clusters increases, it still ranged between 0.06 and 0.07 with 40 clusters. Conclusions: Our results showed that statistical issues arising from small number of clusters in generalized estimating equations is currently inadequately handled in cluster randomized trials. Potential for type I error inflation could be very high when the sandwich estimator is used without bias correction.}, author = {Huang, Shuang and Fiero, Mallorie H and Bell, Melanie L}, doi = {10.1177/1740774516643498}, - issn = {1740-7745, 1740-7753}, journal = {Clinical Trials}, langid = {english}, month = aug, @@ -1068,42 +922,35 @@ @article{huang2016GeneralizedEstimatingEquations pages = {445--449}, shorttitle = {Generalized Estimating Equations in Cluster Randomized Trials with a Small Number of Clusters}, title = {Generalized Estimating Equations in Cluster Randomized Trials with a Small Number of Clusters: {{Review}} of Practice and Simulation Study}, - urldate = {2024-01-05}, volume = {13}, year = {2016}, - bdsk-url-1 = {https://doi.org/10.1177/1740774516643498}} +} @article{hussey2007DesignAnalysisStepped, abstract = {Cluster randomized trials (CRT) are often used to evaluate therapies or interventions in situations where individual randomization is not possible or not desirable for logistic, financial or ethical reasons. While a significant and rapidly growing body of literature exists on CRTs utilizing a ``parallel'' design (i.e. 
I clusters randomized to each treatment), only a few examples of CRTs using crossover designs have been described. In this article we discuss the design and analysis of a particular type of crossover CRT -- the stepped wedge -- and provide an example of its use.}, author = {Hussey, Michael A. and Hughes, James P.}, doi = {10.1016/j.cct.2006.05.007}, - file = {C:\Users\jamespustejovsky\Zotero\storage\ZX4WVRHJ\S1551714406000632.html}, - issn = {1551-7144}, journal = {Contemporary Clinical Trials}, keywords = {Cluster randomized trial,Prevention trials,Stepped wedge design}, month = feb, number = {2}, pages = {182--191}, title = {Design and Analysis of Stepped Wedge Cluster Randomized Trials}, - urldate = {2024-01-05}, volume = {28}, year = {2007}, - bdsk-url-1 = {https://doi.org/10.1016/j.cct.2006.05.007}} +} @book{jones2012IntroductionScientificProgramming, - abstract = {Known for its versatility, the free programming language R is widely used for statistical computing and graphics, but is also a fully functional programming language well suited to scientific programming.An Introduction to Scientific Programming and Simulation Using R teaches the skills needed to perform scientific programming while also introducin}, address = {New York}, author = {Jones, Owen and Maillardet, Robert and Robinson, Andrew}, doi = {10.1201/9781420068740}, - isbn = {978-0-429-14333-5}, - month = oct, publisher = {{Chapman and Hall/CRC}}, title = {Introduction to {{Scientific Programming}} and {{Simulation Using R}}}, year = {2012}, - bdsk-url-1 = {https://doi.org/10.1201/9781420068740}} +} @misc{joshi2022SimhelpersHelperFunctions, - author = {Joshi, Megha and Pustejovsky, James}, + author = {Joshi, Megha and Pustejovsky, James E.}, keywords = {cited}, title = {Simhelpers: {{Helper Functions}} for {{Simulation Studies}}}, year = {2022}} @@ -1130,39 +977,30 @@ @article{kern2016AssessingMethodsGeneralizing abstract = {Randomized experiments are considered the gold standard for causal 
inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research, increasing attention is being paid to the potential lack of generalizability of randomized experiments because the experimental participants may be unrepresentative of the target population of interest. This article examines whether generalization may be assisted by statistical methods that adjust for observed differences between the experimental participants and members of a target population. The methods examined include approaches that reweight the experimental data so that participants more closely resemble the target population and methods that utilize models of the outcome. Two simulation studies and one empirical analysis investigate and compare the methods' performance. One simulation uses purely simulated data while the other utilizes data from an evaluation of a school-based dropout prevention program. Our simulations suggest that machine learning methods outperform regression-based methods when the required structural (ignorability) assumptions are satisfied. When these assumptions are violated, all of the methods examined perform poorly. Our empirical analysis uses data from a multisite experiment to assess how well results from a given site predict impacts in other sites. Using a variety of extrapolation methods, predicted effects for each site are compared to actual benchmarks. Flexible modeling approaches perform best, although linear regression is not far behind. Taken together, these results suggest that flexible modeling techniques can aid generalization while underscoring the fact that even state-of-the-art statistical techniques still rely on strong assumptions.}, author = {Kern, Holger L. and Stuart, Elizabeth A. 
and Hill, Jennifer and Green, Donald P.}, doi = {10.1080/19345747.2015.1060282}, - file = {C:\Users\jamespustejovsky\Zotero\storage\2F7WKUXW\Kern et al. - 2016 - Assessing Methods for Generalizing Experimental Im.pdf}, - issn = {1934-5747}, journal = {Journal of Research on Educational Effectiveness}, keywords = {Bayesian Additive Regression Trees external validity generalizability propensity score weighting}, month = jan, number = {1}, pages = {103--127}, - pmid = {27668031}, - publisher = {Routledge}, title = {Assessing {{Methods}} for {{Generalizing Experimental Impact Estimates}} to {{Target Populations}}}, - urldate = {2024-01-01}, volume = {9}, year = {2016}, - bdsk-url-1 = {https://doi.org/10.1080/19345747.2015.1060282}} +} @article{koehler2009AssessmentMonteCarlo, abstract = {Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. 
The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95\% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.}, author = {Koehler, Elizabeth and Brown, Elizabeth and Haneuse, Sebastien J.-P. A.}, doi = {10.1198/tast.2009.0030}, - file = {C:\Users\jamespustejovsky\Zotero\storage\BZFE3YSE\Koehler et al. - 2009 - On the Assessment of Monte Carlo Error in Simulati.pdf}, - issn = {0003-1305}, journal = {The American Statistician}, keywords = {Bootstrap,cited,Jackknife,Replication}, month = may, number = {2}, pages = {155--162}, - pmid = {22544972}, publisher = {Taylor \& Francis}, title = {On the {{Assessment}} of {{Monte Carlo Error}} in {{Simulation-Based Statistical Analyses}}}, - urldate = {2024-01-02}, volume = {63}, year = {2009}, - bdsk-url-1 = {https://doi.org/10.1198/tast.2009.0030}} +} @misc{leschinski2019MonteCarloAutomaticParallelized, author = {Leschinski, Christian Hendrik}, @@ -1174,48 +1012,37 @@ @misc{leschinski2019MonteCarloAutomaticParallelized @article{leyrat2013PropensityScoresUsed, abstract = {Cluster randomized trials (CRTs) are often prone to selection bias despite randomization. Using a simulation study, we investigated the use of propensity score (PS) based methods in estimating treatment effects in CRTs with selection bias when the outcome is quantitative. Of four PS-based methods (adjustment on PS, inverse weighting, stratification, and optimal full matching method), three successfully corrected the bias, as did an approach using classical multivariable regression. However, they showed poorer statistical efficiency than classical methods, with higher standard error for the treatment effect, and type I error much smaller than the 5\% nominal level. 
Copyright {\copyright} 2013 John Wiley \& Sons, Ltd.}, author = {Leyrat, C. and Caille, A. and Donner, A. and Giraudeau, B.}, - copyright = {Copyright {\copyright} 2013 John Wiley \& Sons, Ltd.}, doi = {10.1002/sim.5795}, - file = {C\:\\Users\\jamespustejovsky\\Zotero\\storage\\VHHHRSDD\\Leyrat et al. - 2013 - Propensity scores used for analysis of cluster ran.pdf;C\:\\Users\\jamespustejovsky\\Zotero\\storage\\FU2REDBN\\sim.html}, - issn = {1097-0258}, journal = {Statistics in Medicine}, keywords = {cluster randomized trial,Monte-Carlo simulations,propensity score,selection bias}, langid = {english}, number = {19}, pages = {3357--3372}, - shorttitle = {Propensity Scores Used for Analysis of Cluster Randomized Trials with Selection Bias}, title = {Propensity Scores Used for Analysis of Cluster Randomized Trials with Selection Bias: A Simulation Study}, - urldate = {2024-01-05}, volume = {32}, year = {2013}, - bdsk-url-1 = {https://doi.org/10.1002/sim.5795}} +} @article{lohmann2022ItTimeTen, abstract = {The quantitative analysis of research data is a core element of empirical research. The performance of statistical methods that are used for analyzing empirical data can be evaluated and compared using computer simulations. A single simulation study can influence the analyses of thousands of empirical studies to follow. With great power comes great responsibility. Here, we argue that this responsibility includes replication of simulation studies to ensure a sound foundation for data analytical decisions. Furthermore, being designed, run, and reported by humans, simulation studies face challenges similar to other experimental empirical research and hence should not be exempt from replication attempts. We highlight that the potential replicability of simulation studies is an opportunity quantitative methodology as a field should pay more attention to.}, author = {Lohmann, Anna and Astivia, Oscar L. O. and Morris, Tim P. and Groenwold, Rolf H. 
H.}, - file = {C:\Users\jamespustejovsky\Zotero\storage\GBRU4F33\Lohmann et al. - 2022 - It's time! Ten reasons to start replicating simula.pdf}, - issn = {2674-1199}, journal = {Frontiers in Epidemiology}, title = {It's Time! {{Ten}} Reasons to Start Replicating Simulation Studies}, - urldate = {2024-01-01}, volume = {2}, year = {2022}} @article{miratrix2021applied, author = {Miratrix, Luke W. and Weiss, Michael J. and Henderson, Brit}, doi = {10.1080/19345747.2020.1831115}, - issn = {1934-5747, 1934-5739}, journal = {Journal of Research on Educational Effectiveness}, langid = {english}, month = jan, number = {1}, pages = {270--308}, - shorttitle = {An {{Applied Researcher}}'s {{Guide}} to {{Estimating Effects}} from {{Multisite Individually Randomized Trials}}}, title = {An {{Applied Researcher}}'s {{Guide}} to {{Estimating Effects}} from {{Multisite Individually Randomized Trials}}: {{Estimands}}, {{Estimators}}, and {{Estimates}}}, - urldate = {2024-01-05}, volume = {14}, year = {2021}, - bdsk-url-1 = {https://doi.org/10.1080/19345747.2020.1831115}} +} @book{miratrix2023DesigningMonteCarlo, author = {Miratrix, Luke W. and Pustejovsky, Jame E.}, @@ -1227,21 +1054,17 @@ @book{miratrix2023DesigningMonteCarlo @article{moerbeek2019WhatAreStatistical, abstract = {Subjects in randomized controlled trials do not always comply to the treatment condition they have been assigned to. This may cause the estimated effect of the intervention to be biased and also affect efficiency, coverage of confidence intervals, and statistical power. In cluster randomized trials non-compliance may occur at the subject level but also at the cluster level. In the latter case, all subjects within the same cluster have the same compliance status. The purpose of this study is to investigate the statistical implications of non-compliance in cluster randomized trials. A simulation study was conducted with varying degrees of non-compliance at either the cluster level or subject level. 
The probability of non-compliance depends on a covariate at the cluster or subject level. Various realistic values of the intraclass correlation coefficient and cluster size are used. The data are analyzed by intention to treat, as treated, per protocol and the instrumental variable approach. The results show non-compliance may result in downward biased estimates of the intervention effect and an under- or overestimate of its standard deviation. The coverage of the confidence intervals may be too small, and in most cases, empirical power is too small. The results are more severe when the probability of non-compliance increases and the covariate that affects compliance is unobserved. It is advocated to avoid non-compliance. If this is not possible, compliance status and covariates that affect compliance should be measured and included in the statistical model.}, author = {Moerbeek, Mirjam and van Schie, Sander}, - copyright = {{\copyright} 2019 The Authors. Statistics~in~Medicine Published by John Wiley \& Sons Ltd.}, doi = {10.1002/sim.8351}, - file = {C\:\\Users\\jamespustejovsky\\Zotero\\storage\\3D57RHNZ\\Moerbeek and Schie - 2019 - What are the statistical implications of treatment.pdf;C\:\\Users\\jamespustejovsky\\Zotero\\storage\\5VFMRPKP\\sim.html}, - issn = {1097-0258}, journal = {Statistics in Medicine}, keywords = {cluster randomized trial,simulation study,treatment non-compliance}, langid = {english}, number = {26}, pages = {5071--5084}, - shorttitle = {What Are the Statistical Implications of Treatment Non-Compliance in Cluster Randomized Trials}, title = {What Are the Statistical Implications of Treatment Non-Compliance in Cluster Randomized Trials: {{A}} Simulation Study}, urldate = {2024-01-05}, volume = {38}, year = {2019}, - bdsk-url-1 = {https://doi.org/10.1002/sim.8351}} +} @book{mooney1997MonteCarloSimulation, author = {Mooney, Christopher Z}, @@ -1254,46 +1077,36 @@ @book{mooney1997MonteCarloSimulation @article{morris2019UsingSimulationStudies, 
author = {Morris, Tim P. and White, Ian R. and Crowther, Michael J.}, doi = {10.1002/sim.8086}, - file = {C:\Users\jamespustejovsky\Zotero\storage\VNK7VV22\Morris et al. - 2019 - Using simulation studies to evaluate statistical m.pdf}, - issn = {02776715}, journal = {Statistics in Medicine}, keywords = {cited}, langid = {english}, month = jan, - shorttitle = {Using Simulation Studies to Evaluate Statistical Methods}, title = {Using Simulation Studies to Evaluate Statistical Methods}, urldate = {2019-01-26}, year = {2019}, - bdsk-url-1 = {https://doi.org/10.1002/sim.8086}} +} @misc{nguyen2022MpowerPackagePower, abstract = {Estimating sample size and statistical power is an essential part of a good study design. This R package allows users to conduct power analysis based on Monte Carlo simulations in settings in which consideration of the correlations between predictors is important. It runs power analyses given a data generative model and an inference model. It can set up a data generative model that preserves dependence structures among variables given existing data (continuous, binary, or ordinal) or high-level descriptions of the associations. Users can generate power curves to assess the trade-offs between sample size, effect size, and power of a design. This paper presents tutorials and examples focusing on applications for environmental mixture studies when predictors tend to be moderately to highly correlated. It easily interfaces with several existing and newly developed analysis strategies for assessing associations between exposures and health outcomes. However, the package is sufficiently general to facilitate power simulations in a wide variety of settings.}, author = {Nguyen, Phuc H. and Engel, Stephanie M. and Herring, Amy H.}, - file = {C:\Users\jamespustejovsky\Zotero\storage\ZV6P78KM\Nguyen et al. 
- 2022 - mpower An R Package for Power Analysis via Simula.pdf}, howpublished = {https://arxiv.org/abs/2209.08036v1}, journal = {arXiv.org}, langid = {english}, month = sep, shorttitle = {Mpower}, title = {Mpower: {{An R Package}} for {{Power Analysis}} via {{Simulation}} for {{Correlated Data}}}, - urldate = {2024-01-01}, year = {2022}} @article{orcan2021MonteCarloSEMPackageSimulate, abstract = {Monte Carlo simulation is a useful tool for researchers to estimated accuracy of a statistical model. It is usually used for investigating parameter estimation procedure or violation of assumption for some given conditions. To run a simulation either the paid software or open source but free program such as R is need to be used. For that, researchers must have a good knowledge about the theoretical procedures. This paper introduces the R package called MonteCarloSEM. The package helps to simulate and analyze data sets for some simulation condition such as sample size and normality for a given model. Also, an example is given to show how the functions within the package works.}, author = {Or{\c c}an, Fatih}, - file = {C:\Users\jamespustejovsky\Zotero\storage\DRYK6I84\Or{\c c}an - 2021 - MonteCarloSEM An R Package to Simulate Data for S.pdf}, - issn = {2148-7456}, journal = {International Journal of Assessment Tools in Education}, keywords = {cited}, langid = {english}, month = sep, number = {3}, pages = {704--713}, - publisher = {{\.I}zzet KARA}, - shorttitle = {{{MonteCarloSEM}}}, title = {{{MonteCarloSEM}}: {{An R Package}} to {{Simulate Data}} for {{SEM}}}, - urldate = {2024-01-02}, volume = {8}, year = {2021}} @@ -1301,33 +1114,28 @@ @article{paxton2001MonteCarloExperiments abstract = {The use of Monte Carlo simulations for the empirical assessment of statistical estimators is becoming more common in structural equation modeling research. Yet, there is little guidance for the researcher interested in using the technique. 
In this article we illustrate both the design and implementation of Monte Carlo simulations. We present 9 steps in planning and performing a Monte Carlo analysis: (1) developing a theoretically derived research question of interest, (2) creating a valid model, (3) designing specific experimental conditions, (4) choosing values of population parameters, (5) choosing an appropriate software package, (6) executing the simulations, (7) file storage, (8) troubleshooting and verification, and (9) summarizing results. Throughout the article, we use as a running example a Monte Carlo simulation that we performed to illustrate many of the relevant points with concrete information and detail.}, author = {Paxton, Pamela and Curran, Patrick J. and Bollen, Kenneth A. and Kirby, Jim and Chen, Feinian}, doi = {10.1207/S15328007SEM0802_7}, - issn = {1070-5511}, journal = {Structural Equation Modeling: A Multidisciplinary Journal}, keywords = {cited}, month = apr, number = {2}, pages = {287--312}, publisher = {Routledge}, - shorttitle = {Monte {{Carlo Experiments}}}, title = {Monte {{Carlo Experiments}}: {{Design}} and {{Implementation}}}, - urldate = {2024-01-02}, volume = {8}, year = {2001}, - bdsk-url-1 = {https://doi.org/10.1207/S15328007SEM0802_7}} +} @book{robert2010IntroducingMonteCarlo, address = {New York, NY}, author = {Robert, Christian and Casella, George}, doi = {10.1007/978-1-4419-1576-4}, - file = {C:\Users\jamespustejovsky\Zotero\storage\RX3A54TU\Robert and Casella - 2010 - Introducing Monte Carlo Methods with R.pdf}, isbn = {978-1-4419-1582-5 978-1-4419-1576-4}, keywords = {bayesian statistics,Markov chain,Mathematica,Monte Carlo,Monte Carlo method,Random variable,simulation,STATISTICA}, langid = {english}, publisher = {Springer}, title = {Introducing {{Monte Carlo Methods}} with {{R}}}, - urldate = {2024-01-02}, year = {2010}, - bdsk-url-1 = {https://doi.org/10.1007/978-1-4419-1576-4}} +} @misc{scheer2020SimToolConductSimulation, author = {Scheer, Marcel}, @@ 
-1340,63 +1148,48 @@ @article{siepe2024SimulationStudiesMethodological abstract = {Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary widely in terms of their design, execution, and reporting. In order to assess the quality of typical simulation studies in psychology, we reviewed 321 articles published in Psychological Methods, Behavioral Research Methods, and Multivariate Behavioral Research in 2021 and 2022, among which 100/321 = 31.2\% report a simulation study. We find that many articles do not provide complete and transparent information about key aspects of the study, such as justifications for the number of simulation repetitions, Monte Carlo uncertainty estimates, or code and data to reproduce the simulation studies. To address this problem, we provide a summary of the ADEMP (Aims, Data-generating mechanism, Estimands and other targets, Methods, Performance measures) design and reporting framework from Morris, White, and Crowther (2019) adapted to simulation studies in psychology. Based on this framework, we provide ADEMP-PreReg, a step-by-step template for researchers to use when designing, potentially preregistering, and reporting their simulation studies. We give formulae for estimating common performance measures, their Monte Carlo standard errors, and for calculating the number of simulation repetitions to achieve a desired Monte Carlo standard error. Finally, we give a detailed tutorial on how to apply the ADEMP framework in practice using an example simulation study on the evaluation of methods for the analysis of pre--post measurement experiments.}, author = {Siepe, Bj{\"o}rn S. and Barto{\v s}, Franti{\v s}ek and Morris, Tim and Boulesteix, Anne-Laure and Heck, Daniel W. and Pawel, Samuel}, doi = {10.31234/osf.io/ufgy6}, - file = {C\:\\Users\\jamespustejovsky\\Zotero\\storage\\3WUS7RGS\\Siepe et al. 
- 2024 - Simulation Studies for Methodological Research in .pdf;C\:\\Users\\jamespustejovsky\\Zotero\\storage\\ZA4S3LP9\\ufgy6.html}, keywords = {cited}, langid = {american}, month = jan, - publisher = {OSF}, - shorttitle = {Simulation {{Studies}} for {{Methodological Research}} in {{Psychology}}}, title = {Simulation Studies for Methodological Research in Psychology: A Standardized Template for Planning, Preregistration, and Reporting}, - urldate = {2024-01-01}, year = {2024}, - bdsk-url-1 = {https://doi.org/10.31234/osf.io/ufgy6}} +} @article{sigal2016PlayItAgain, abstract = {Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep learning curve when organizing simulation code. A primary goal of this article is to demonstrate how well-suited MCS designs are to classroom demonstrations, and how they provide a hands-on method for students to become acquainted with complex statistical concepts. In this article, essential programming aspects for writing MCS code in R are overviewed, multiple applied examples with relevant code are provided, and the benefits of using a generate--analyze--summarize coding structure over the typical ``for-loop'' strategy are discussed.}, author = {Sigal, Matthew J. and Chalmers, R. 
Philip}, doi = {10.1080/10691898.2016.1246953}, - file = {C:\Users\jamespustejovsky\Zotero\storage\VJFLBCD7\Sigal and Chalmers - 2016 - Play It Again Teaching Statistics With Monte Carl.pdf}, - issn = {null}, journal = {Journal of Statistics Education}, keywords = {Active learning,R,Simulation,Statistical computing}, month = sep, number = {3}, pages = {136--156}, publisher = {Taylor \& Francis}, - shorttitle = {Play {{It Again}}}, title = {Play {{It Again}}: {{Teaching Statistics With Monte Carlo Simulation}}}, - urldate = {2024-01-01}, volume = {24}, year = {2016}, - bdsk-url-1 = {https://doi.org/10.1080/10691898.2016.1246953}} +} @article{skrondal2000DesignAnalysisMonte, abstract = {The design and analysis of Monte Carlo experiments, with special reference to structural equation modelling, is discussed in this article. These topics merit consideration, since the validity of the conclusions drawn from a Monte Carlo study clearly hinges on these features. It is argued that comprehensive Monte Carlo experiments can be implemented on a PC if the experiments are adequately designed. This is especially important when investigating modern computer intensive methodologies like resampling and Markov Chain Monte Carlo methods. We are faced with three fundamental challenges in Monte Carlo experimentation. The first problem is statistical precision, which concerns the reliability of the obtained results. External validity, on the other hand, depends on the number of experimental conditions, and is crucial for the prospects of generalising the results beyond the specific experiment. Finally, we face the constraint on available computer resources. 
The conventional wisdom in designing and analysing Monte Carlo experiments embodies no explicit specification of meta-model for analysing the output of the experiment, the use of case studies or full factorial designs as experimental plans, no use of variance reduction techniques, a large number of replications, and "eyeballing" of the results. A critical examination of the conventional wisdom is presented in this article. We suggest that the following alternative procedures should be considered. First of all, we argue that it is profitable to specify explicit meta-models, relating the chosen performance statistics and experimental conditions. Regarding the experimental plan, we recommend the use of incomplete designs, which will often result in considerable savings. We also consider the use of common random numbers in the simulation phase, since this may enhance the precision in estimating meta-models. The use of fewer replications per trial, enabling us to investigate an increased number of experimental conditions, should also be considered in order to improve the external validity at the cost of the conventionally excessive precision.}, author = {Skrondal, Anders}, doi = {10.1207/S15327906MBR3502_1}, - issn = {0027-3171}, journal = {Multivariate Behavioral Research}, keywords = {cited}, month = apr, number = {2}, pages = {137--167}, - pmid = {26754081}, - publisher = {Routledge}, - shorttitle = {Design and {{Analysis}} of {{Monte Carlo Experiments}}}, title = {Design and {{Analysis}} of {{Monte Carlo Experiments}}: {{Attacking}} the {{Conventional Wisdom}}}, - urldate = {2024-01-02}, volume = {35}, year = {2000}, - bdsk-url-1 = {https://doi.org/10.1207/S15327906MBR3502_1}} +} @article{smith1973MonteCarloMethods, author = {Smith, Vincent Kerry}, - file = {C:\Users\jamespustejovsky\Zotero\storage\WC9SPKTS\1130000796834682624.html}, journal = {(No Title)}, langid = {english}, shorttitle = {Monte {{Carlo}} Methods}, title = {Monte {{Carlo}} Methods : {{Their Role}} 
for {{Econometrics}}}, - urldate = {2024-01-02}, year = {1973}} @article{sofrygin2017SimcausalPackageConducting, @@ -1409,12 +1202,10 @@ @article{sofrygin2017SimcausalPackageConducting title = {Simcausal {{R Package}}: {{Conducting Transparent}} and {{Reproducible Simulation Studies}} of {{Causal Effect Estimation}} with {{Complex Longitudinal Data}}}, volume = {81}, year = {2017}, - bdsk-url-1 = {https://doi.org/10.18637/jss.v081.i02}} +} @article{vevea1995general, author = {Vevea, Jack L and Hedges, Larry V}, - date = {1995-09-01}, - date-modified = {2025-06-19 10:38:20 -0700}, doi = {10.1007/BF02294384}, journaltitle = {Psychometrika}, number = {3}, @@ -1423,19 +1214,16 @@ @article{vevea1995general title = {A general linear model for estimating effect size in the presence of publication bias}, volume = {60}, year = {1995}, - bdsk-url-1 = {https://doi.org/10.1007/BF02294384}} +} @article{white2023HowCheckSimulation, abstract = {Simulation studies are powerful tools in epidemiology and biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and how to design and conduct the study to give results that are easier to check. Simulation studies should be designed to include some settings in which answers are already known. They should be coded in stages, with data-generating mechanisms checked before simulated data are analysed. Results should be explored carefully, with scatterplots of standard error estimates against point estimates surprisingly powerful tools. Failed estimation and outlying estimates should be identified and dealt with by changing data-generating mechanisms or coding realistic hybrid analysis procedures. Finally, we give a series of ideas that have been useful to us in the past for checking unexpected results. 
Following our advice may help to prevent errors and to improve the quality of published simulation studies.}, author = {White, Ian R and Pham, Tra My and Quartagno, Matteo and Morris, Tim P}, doi = {10.1093/ije/dyad134}, - file = {C\:\\Users\\jamespustejovsky\\Zotero\\storage\\MI2FJWW8\\White et al. - 2023 - How to check a simulation study.pdf;C\:\\Users\\jamespustejovsky\\Zotero\\storage\\PP2MIHSI\\7313663.html}, - issn = {0300-5771}, journal = {International Journal of Epidemiology}, keywords = {cited}, month = oct, pages = {dyad134}, title = {How to Check a Simulation Study}, - urldate = {2024-01-01}, year = {2023}, - bdsk-url-1 = {https://doi.org/10.1093/ije/dyad134}} +} diff --git a/sec b/sec new file mode 100644 index 0000000..e69de29 From 4e5d2a439cde6414e4d580b5d0ebfc5120cceece Mon Sep 17 00:00:00 2001 From: jepusto Date: Mon, 20 Oct 2025 11:57:14 -0500 Subject: [PATCH 04/10] Fixed typos in three-parameter item response model. --- 020-Data-generating-models.Rmd | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/020-Data-generating-models.Rmd b/020-Data-generating-models.Rmd index 68178ac..dcf98ee 100644 --- a/020-Data-generating-models.Rmd +++ b/020-Data-generating-models.Rmd @@ -585,16 +585,16 @@ For a particular fixed-length test, the set of item parameters would depend on t But we are not (yet) dealing with actual testing data, so we will need to make up an auxiliary model for these parameters. Perhaps we could just simulate some values? Arbitrarily, let's draw the difficulty parameters from a normal distribution with mean $\mu_\alpha = 0$ and standard deviation $\tau_\alpha = 1$. -The discrimination parameters have to be greater than zero, and values near $\beta_m = 1$ make the model simplify (in other words, if $\beta_1 = 1$ then we can drop the parameter from the model), so let's draw them from a gamma distribution with mean $\mu_\beta = 1$ and standard deviation $\tau_\beta = 0.2$. 
+The discrimination parameters have to be greater than zero, and values near $\alpha_m = 1$ make the model simplify (in other words, if $\alpha_1 = 1$ then we can drop the parameter from the model), so let's draw them from a gamma distribution with mean $\mu_\alpha = 1$ and standard deviation $\tau_\alpha = 0.2$.
 This decision requires a bit of work: gamma distributions are usually parameterized in terms of shape and rate, not mean and standard deviation.
 A bit of poking on Wikipedia gives us the answer, however:
-shape is equal to $\mu_\beta^2 \tau_\beta^2 = 0.2^2$ and rate is equal to $\mu_\beta \tau_\beta^2 = 0.2^2$.
+shape is equal to $\mu_\alpha^2 / \tau_\alpha^2 = 25$ and rate is equal to $\mu_\alpha / \tau_\alpha^2 = 25$.
 Finally, we imagine that all the test questions have four possible responses, and therefore set $\gamma_m = \frac{1}{4}$ for all the items, just like the instructor suggested.
 Each item requires three numbers; the easiest way to generate them is to let them all be independent of each other, so we do that.
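As an editorial aside (not part of the patch): for a gamma distribution with mean $\mu$ and standard deviation $\tau$, the conversion is shape $= \mu^2/\tau^2$ and rate $= \mu/\tau^2$, which a quick simulation can sanity-check:

```r
# Sketch: check the mean/SD -> shape/rate conversion for the gamma.
# With mu = 1 and tau = 0.2: shape = mu^2 / tau^2 = 25, rate = mu / tau^2 = 25.
mu <- 1
tau <- 0.2
draws <- rgamma(1e5, shape = mu^2 / tau^2, rate = mu / tau^2)
mean(draws)  # should be close to 1
sd(draws)    # should be close to 0.2
```

Note that plugging the same values into shape $= \mu^2\tau^2$ and rate $= \mu\tau^2$ gives a Gamma(0.04, 0.04), which has mean 1 but standard deviation 5, so the direction of the division matters here.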
With that, let's make up some item parameters:

```{r}
-alphas <- rnorm(M, mean = 0, sd = 1.5)  # difficulty parameters
-betas <- rgamma(M, shape = 0.2^2, rate = 0.2^2)  # discrimination parameters
+betas <- rnorm(M, mean = 0, sd = 1.5)  # difficulty parameters
+alphas <- rgamma(M, shape = 25, rate = 25)  # discrimination parameters
 gammas <- rep(1 / 4, M)  # guessing parameters
```

@@ -638,10 +638,13 @@ r_3PL_IRT <- function(
   thetas <- rnorm(N)
 
   # generate item parameters
-  alphas <- rnorm(M, mean = diff_M, sd = diff_SD)
-  betas <- rgamma(M,
-                  shape = disc_M^2 * disc_SD^2,
-                  rate = disc_M * disc_SD^2)
+
+  alphas <- rgamma(
+    M,
+    shape = disc_M^2 / disc_SD^2,
+    rate = disc_M / disc_SD^2
+  )
+  betas <- rnorm(M, mean = diff_M, sd = diff_SD)
   gammas <- rep(1 / item_options, M)
 
   # simulate item responses

From 66570654a04bfdee17cae7165541d125ceb189ba Mon Sep 17 00:00:00 2001
From: jepusto
Date: Mon, 20 Oct 2025 12:27:30 -0500
Subject: [PATCH 05/10] Specify tinytex installer in github actions workflow.

---
 .github/workflows/deploy_bookdown.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.github/workflows/deploy_bookdown.yml b/.github/workflows/deploy_bookdown.yml
index 093c492..1c415e5 100644
--- a/.github/workflows/deploy_bookdown.yml
+++ b/.github/workflows/deploy_bookdown.yml
@@ -22,6 +22,8 @@ jobs:
         run: Rscript -e 'bookdown::render_book("index.Rmd", "bookdown::gitbook")'
       - name: Set up tinytex
         uses: r-lib/actions/setup-tinytex@v2
+        env:
+          TINYTEX_INSTALLER: TinyTeX
       - name: Check latex installation
         run: tlmgr --version
       - name: Render pdf book

From ae3ed80edd11019414163f5640146bfc5c8fc757 Mon Sep 17 00:00:00 2001
From: jepusto
Date: Fri, 31 Oct 2025 10:07:12 -0500
Subject: [PATCH 06/10] Shrinkage for the cluster RCT multifactor simulation.
--- code/meta_analysis_playing.R | 194 +++++++++++++++++++++++++++++++++++ 1 file changed, 194 insertions(+) create mode 100644 code/meta_analysis_playing.R diff --git a/code/meta_analysis_playing.R b/code/meta_analysis_playing.R new file mode 100644 index 0000000..579c987 --- /dev/null +++ b/code/meta_analysis_playing.R @@ -0,0 +1,194 @@ + + +# Looking at how to viz simulation results and deal with +# heteroskedasticity in the MCSEs. + +# E.g., by the "MCSE Funnel Plot" and by using meta analysis to get +# shrunk estimates of performance + +library( tidyverse ) + +#### Load the ClusterRCT sim results and calc performance metrics #### + +source( here::here( "case_study_code/clustered_data_simulation.R" ) ) +source( here::here( "case_study_code/cronbach_alpha_simulation.R" ) ) + +res <- readRDS( file = here::here( "results/simulation_CRT.rds" ) ) +res + + +# Cut down to 100 reps to make MCSE even larger (optional) +res <- res %>% + filter( as.numeric(runID) <= 100 ) +res + + +library( simhelpers ) +sres <- + res %>% + group_by( + n_bar, J, ATE, size_coef, ICC, alpha, method + ) %>% + summarise( + calc_absolute( estimates = ATE_hat, true_param = ATE, + criteria = c("bias","stddev", "rmse")), + calc_relative_var( estimates = ATE_hat, var_estimates = SE_hat^2, + criteria = "relative bias" ), + power = mean( p_value <= 0.05 ), + ESE_hat = sqrt( mean( SE_hat^2 ) ), + SD_SE_hat = sqrt( sd( SE_hat^2 ) ), + ) %>% + rename( + R = K_absolute, + RMSE = rmse, + RMSE_mcse = rmse_mcse, + SE = stddev, + SE_mcse = stddev_mcse + ) %>% + dplyr::select( -K_relvar ) %>% + ungroup() + +sres + + +#### The meta analysis funnel plots #### + +# For Bias + +ggplot( sres, aes( bias_mcse, bias, col=as.factor(size_coef) )) + + facet_grid( alpha ~ method ) + + geom_point() + + geom_abline( slope=2, intercept=0, col="darkgrey", lty=2 ) + + geom_abline( slope=-2, intercept=0, col="darkgrey", lty=2 ) + + theme_minimal() + +summary( sres$bias_mcse ) + + +# For SE + +# NOTE: Not useful since we don't 
expect these SE values to be
+# centered around any common value---we would need to somehow subtract
+# out their expected values to see the residuals scatter, I think?
+ggplot( sres, aes( SE_mcse, SE, col=as.factor(size_coef) )) +
+  facet_grid( alpha ~ method ) +
+  geom_point() +
+  theme_minimal()
+
+summary( sres$SE_mcse / sres$SE )
+
+
+# Is smoothing and then looking at residuals better?
+
+# This would be asking: Given reasonably precise SE_mcse estimates,
+# we have a sense of what the true SE should be, roughly. We then see
+# if the deviation from that is larger than expected?
+M_se = loess( SE ~ SE_mcse, data=sres )
+sres$SE_fitted = predict( M_se )
+sres$SE_resid = sres$SE - sres$SE_fitted
+summary( sres$SE_resid )
+
+ggplot( sres, aes( SE_mcse, SE_resid, col=as.factor(size_coef) )) +
+  facet_grid( alpha ~ method ) +
+  geom_point() +
+  geom_abline( slope=2, intercept=0, col="darkgrey", lty=2 ) +
+  geom_abline( slope=-2, intercept=0, col="darkgrey", lty=2 ) +
+  theme_minimal()
+
+# Is anything to be learned here?
+ + +# For RMSE +# Also broken due to the SE reason, above +ggplot( sres, aes( RMSE_mcse, RMSE, col=as.factor(size_coef) )) + + facet_grid( alpha ~ method ) + + geom_point() + + theme_minimal() + +summary( sres$RMSE_mcse / sres$RMSE ) + + + + +#### Initial attempt to fit multilevel model on the raw data #### + + +if ( FALSE ) { + library( lme4 ) + res + res$err = res$ATE_hat - res$ATE + table( table( res$seed ) ) + nrow(sres) / 3 + + M = lmer( err ~ 1 + method + (0+method|seed) + (1|seed:runID), + data = res ) + + + M = lmer( err ~ 1 + (as.factor(size_coef)*as.factor(alpha) + ICC + as.factor(n_bar) + as.factor(J) ) * method + (1+method|seed) + (1|seed:runID), + data = res ) + + arm::display(M) + VarCorr(M) + a = coef(M)$seed %>% + as.data.frame() + a$seed = rownames(a) + head(a) + + aL <- a %>% + pivot_longer( cols = -c( seed, `(Intercept)` ), + names_to = "method", + values_to = "bias_method" ) + sres + a = left_join( a, sres, by="seed" ) + +} + +#------------------------------------------------------------------------------- +# random effects meta-analysis of bias per method +library(metafor) + +sres %>% + filter(method == "MLM") %>% + rma.uni( + yi = bias, sei = bias_mcse, + mods = ~ as.factor(size_coef) * as.factor(alpha) * ICC + as.factor(n_bar) + as.factor(J), + data = . 
+ ) + +RE_shrink <- function(dat) { + RE_fit <- rma.uni( + yi = bias, sei = bias_mcse, + mods = ~ as.factor(size_coef) * as.factor(alpha) * ICC + as.factor(n_bar) + as.factor(J), + data = dat + ) + + shrunk_bias <- blup(RE_fit) + data.frame(shrunk_bias = shrunk_bias$pred, shrunk_bias_mcse = shrunk_bias$se) +} + +sres_shrunken <- + sres %>% + group_nest(method) %>% + mutate( + shrunk_bias = map(data, RE_shrink) + ) %>% + unnest(cols = c(data, shrunk_bias)) + +ggplot( sres_shrunken, aes( bias_mcse, bias, col=as.factor(size_coef) )) + + facet_grid( alpha ~ method ) + + geom_point() + + geom_abline( slope=2, intercept=0, col="darkgrey", lty=2 ) + + geom_abline( slope=-2, intercept=0, col="darkgrey", lty=2 ) + + scale_x_continuous(limits = c(0, 0.1)) + + scale_y_continuous(limits = c(-0.2, 0.2)) + + theme_minimal() + +ggplot( sres_shrunken, aes( shrunk_bias_mcse, shrunk_bias, col=as.factor(size_coef) )) + + facet_grid( alpha ~ method ) + + geom_point() + + geom_abline( slope=2, intercept=0, col="darkgrey", lty=2 ) + + geom_abline( slope=-2, intercept=0, col="darkgrey", lty=2 ) + + scale_x_continuous(limits = c(0, 0.1)) + + scale_y_continuous(limits = c(-0.2, 0.2)) + + theme_minimal() + From 11fcd5587e1d81722a68514292f4f6a0fbabd60e Mon Sep 17 00:00:00 2001 From: lmiratrix Date: Fri, 31 Oct 2025 13:30:49 -0400 Subject: [PATCH 07/10] Revised chapter 12 (making viz), and did minor stuff to the other surrounding chapters. 
--- 072-presentation-of-results.Rmd | 46 +-- 074-building-good-vizualizations.Rmd | 431 ++++++++++++++++----------- 075-special-topics-on-reporting.Rmd | 295 ++++++++++++------ 3 files changed, 491 insertions(+), 281 deletions(-) diff --git a/072-presentation-of-results.Rmd b/072-presentation-of-results.Rmd index fcf5345..86c0ef8 100644 --- a/072-presentation-of-results.Rmd +++ b/072-presentation-of-results.Rmd @@ -583,14 +583,22 @@ We might expect, for example, that for all methods the true standard error goes Meta-regressions would also typically include interactions between method and factor, to see if some factors impact different methods differently. They can also include interactions between simulation factors, which allows us to explore how the impact of a factor can matter more or less, depending on other aspects of the context. +Using meta regression can also account for simulation uncertainty in some contexts, which can be especially important when the number of iterations per scenario is low. +See @gilbert2024multilevel for more on this. -### Example 1: Biserial, revisited -For example, consider the bias of the biserial correlation estimates from above. -Visually, we see that several factors appear to impact bias, but we might want to get a sense of how much. -In particular, how much does the population vs sample cutoff option matter for bias, across all the simulation factors considered? +### Example 1: Biserial, revisited +In the biserial correlation example above, we saw that bias can change notably across scenarios considered, and that several factors appear to be driving these changes. +These factors also seem to have complex interactions: note how when p1 = 0.5, we get larger dips than when p1 = 1/8. +The figure gives a sense of this complex, rich story, but we might also want to summarize our results to get a sense of overall trends, so we can provide a simpler story of what is going on. 
+We also might want to get a sense of the relative importance of various factors and their interactions. +For example, we might ask how much the population (top row) vs. sample (bottom row) cutoff option matters for bias, across all the simulation factors considered. +Is it a primary driver of when there is a lot of bias, or just one of many players of roughly equal import? + ```{r setup_modeling_demonstration, warning=FALSE, include=FALSE} options(scipen = 5) mod = lm( bias ~ fixed + rho + I(rho^2) + p1 + n, data = r_F) @@ -598,6 +606,7 @@ broom::tidy(mod) %>% knitr::kable( digits = c( 0,4,4,1,2 ) ) ``` + -We can use ANOVA to decompose the variation in bias into components predicted by various combinations of the simulation factors. -Using ANOVA we can identify which factors have negligible/minor influence on the bias of an estimator, and which factors drive the variation we see. -We can then summarise our anova table to see the contribution of the various factors and interactions to the total amount of variation in performance: +ANOVA helps answer these sorts of questions. +In particular, with ANOVA, we can decompose how much bias changes across scenarios into components predicted by various combinations of the simulation factors. +We can do this with the `aov()` function in R, which is a wrapper around `lm()` that is designed for ANOVA. +We first fit a model regressing bias on all interactions of our four simulation factors. +In the R formula syntax, our model is `bias ~ rho * p1 * fixed * n`. + +The sum of squares ANOVA decomposition then provides a means for identifying which factors have negligible/minor influence on the bias of an estimator, and which factors drive the variation we see. 
+For example, the following "eta table" gives the contribution of the various factors and interactions to the total amount of variation in bias across scenarios: ```{r, warning=FALSE, echo=FALSE} anova_table <- aov(bias ~ rho * p1 * fixed * n, data = r_F) @@ -627,7 +641,7 @@ etaSquared(anova_table) %>% knitr::kable( digits = 2 ) ``` -Here we see which factors are explaining the most variation. E.g., `p1` is explaining 21% of the variation in bias across simulations. +The table shows which factors are explaining the most variation. E.g., `p1` is explaining 21% of the variation in bias across simulations. The contribution of any of the three- or four-way interactions are fairly minimal, by comparison, and could be dropped to simplify our model. Modeling summarizes overall trends, and ANOVA allows us to identify what factors are relatively more important for explaining variation in our performance measure. @@ -638,10 +652,10 @@ We could fit a regression model or ANOVA model for each performance measure in t @lee2023comparing were interested in evaluating how different modeling approaches perform when analyzing cross-classified data structures. To do this they conducted a multi-factor simulation to compare three methods: a method called CCREM, two-way OLS with cluster-robust variance estimation (CRVE), and two-way fixed effects with CRVE. The simulation was complex, involving several factors, so they fit an ANOVA model to understand which factors had the most influence on performance. -In particular, they ran _four_ multifactor simulations, each in a different set of conditions. +In particular, they ran _four_ multifactor simulations, each under a different broader context (those being assumptions met, homoscedasticity violated, exogeneity violated, and presence of random slopes). They then used ANOVA to explore how the simulation factors impacted bias within each of these contexts. 
-One of their tables in the supplementary materials (Table S5.2, see [here](https://osf.io/hy73g), page 20, and reproduced below) shows the results of these four ANOVA models, with each column being a simulation context (those being assumptions met, homoscedasticity violated, exogeneity violated, and presence of random slopes), and the rows corresponding to factors manipulated within the simulation. +One of their tables in the supplementary materials (Table S5.2, see [here](https://osf.io/hy73g), page 20, and reproduced below) shows the results of these four ANOVA models, with each column being a simulation context, and the rows corresponding to factors manipulated within that context. Small, medium, and large effects are marked to make them jump out to the eye. **ANOVA Results on Parameter Bias** @@ -668,7 +682,7 @@ Small, medium, and large effects are marked to make them jump out to the eye. We see that when model assumptions are met or only homoscedasticity is violated, choice of method (CCREM, two-way OLS-CRVE, FE-CRVE) has almost no impact on parameter bias ($\eta^2 = 0.000$ to 0.006). -However, under an exogeneity violation, method choice has a large effect ($\eta^2 = 0.995$), indicating that some methods (like OLS-CRVE) have much more bias than others. +However, under an exogeneity violation, method choice has a large effect ($\eta^2 = 0.995$), indicating that some methods (e.g., OLS-CRVE) have much more bias than others. Other factors such as the effect size of the parameter and the number of schools can also show moderate-to-large impacts on bias in several conditions. The table also shows how an interaction between simulation factors can matter. @@ -676,20 +690,17 @@ For example, interactions between method and number of schools, or students per Overall, the table shows how some aspects of the DGP matter more, and some less. 
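As a self-contained toy illustration of this kind of eta-squared decomposition (the data here are invented, and eta-squared is computed by hand as each term's sum of squares over the total, rather than with a helper function):

```r
# Toy illustration of an eta-squared decomposition from an ANOVA.
# The data are invented; only the computation mirrors the chapter.
set.seed(1)
toy <- expand.grid(
  method = c("A", "B"),
  n      = c(20, 80),
  rep    = 1:5
)
# Bias depends strongly on method, weakly on n, plus simulation noise.
toy$bias <- 0.10 * (toy$method == "B") + 0.01 * (toy$n == 80) +
  rnorm(nrow(toy), sd = 0.01)

# Eta-squared for each term: its sum of squares over the total sum of squares.
fit <- aov(bias ~ method * factor(n), data = toy)
tab <- summary(fit)[[1]]
eta_sq <- tab[["Sum Sq"]] / sum(tab[["Sum Sq"]])
names(eta_sq) <- trimws(rownames(tab))
round(eta_sq, 2)
```

By construction, the `method` term should dominate the table, mirroring how a large $\eta^2$ for method flags method choice as the main driver of bias.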
-Using meta regresion can also account for simulation uncertainty in some contexts, which can be especially important when the number of iterations per scenario is low. -See @gilbert2024multilevel for more on this. ## Reporting -The final form of your report will typically -For your final write-up, you will not want to present everything. -A wall of numbers and observations only serves to pummel the reader. +There is a difference between the results you generate to understand what is going on in your simulation and the results you include in an outward-facing report. +Do not pummel your reader with a deluge of tables, figures, and observations. Instead, present selected results that clearly illustrate the main findings from the study, along with anything unusual or anomalous. Your presentation will typically be best served with a few well-chosen figures. Then, in the text of your write-up, you might include a few specific numerical comparisons. Do not include too many of these, and be sure to say why the numerical comparisons you include are important. -To form these final exhibits, you will likely have to generate a wide range of results that show different facets of your simulation. +To form your final exhibits, you will likely have to generate a wide range of results that show different aspects of your simulation. These are for you, and will help you deeply understand what is going on. You then try to simplify the story, in a way that is honest and transparent, by curating this full set of figures to your final ones. Some of the remainder will then become supplementary materials that contain further detail to both enrich your main narrative and demonstrate that you are not hiding anything. 
@@ -703,3 +714,4 @@ People will naturally think, "if that researcher is so willing to let me see wha + diff --git a/074-building-good-vizualizations.Rmd b/074-building-good-vizualizations.Rmd index f482d6b..206328d 100644 --- a/074-building-good-vizualizations.Rmd +++ b/074-building-good-vizualizations.Rmd @@ -39,7 +39,8 @@ sres <- res %>% RMSE_mcse = rmse_mcse, SE = stddev, SE_mcse = stddev_mcse ) %>% - dplyr::select( -K_relvar ) + dplyr::select( -K_relvar ) %>% + ungroup() sres # 1000 iterations per factor @@ -49,21 +50,20 @@ summary( sres$R ) # Building good visualizations {#building-good-visualization} Visualization should nearly always be the first step in analyzing simulation results. -In the prior chapter, we saw a series of visualizations that showed overall trends across a variety of examples. +In the prior chapter, we saw a variety of examples primarily taken from published work. Those visualizations were not the initial ones created for those research projects. -In practice, making a visualization often requires creating a _bunch_ of graphs to look at different aspects of the data. -From that pile of graphs, you would then refine ones that communicate the overall results most cleanly, and include those in your main write-up. +In practice, getting to a good visualization often requires creating _many_ different graphs to look at different aspects of the data. +From that pile of graphs, you would then curate and refine those that communicate the overall results most cleanly. -In our work, we find we often generate a series of R Markdown reports with comprehensive simulation results targeting our various research questions. +In our work, we find we often generate a series of R Markdown reports with comprehensive sets of charts targeting our various research questions. These initial documents are then discussed internally by the research team. -In this chapter we discuss a set of common tools that we frequently use to explore our simulation results. 
-In particular, we focus on four essential tools: +In this chapter we first discuss four essential tools that we frequently use to make these initial sets of graphs: 1. **Subsetting**: Multifactor simulations can be complex and confusing. Sometimes it is easier to first explore a subset of the simulation results, such as a single factor level. 2. **Many small multiples**: Plot many results in a single plot, with facets to break up the results by simulation factors. -3. **Bundling**: Group the results by a primary factor of interest, and then plotting the performance measure as a boxplot so you can see how much variation there is within that factor level. -4. **Aggregation**: Average the performance measure across some of the simulation factors, so you can see overall trends. +3. **Bundling**: Group the results by a primary factor of interest, and then plot the performance measure as a boxplot so you can see how much variation there is within that factor level. +4. **Aggregation**: Average the performance measure across some of the simulation factors, so you can see overall trends with respect to the remaining factors. Subsetting is a very useful tool, especially when the scope of the simulation feels overwhelming. -And as we just saw, it can also be used as a quick validity check: we can subset to a known context where we know nothing exciting should be happening, and then check that indeed nothing is there. +And as we just saw, it can also be used as a quick validity check: subset to a known context where we know nothing exciting should be happening to verify that indeed nothing is there. -Subsetting allows for a deep dive into a specific context. -It also can make it easier to think through what is happening in a complex context. -Sometimes we might even just report a subset in our final analysis. 
-In this case, we would consider the other levels as a "sensitivity" analysis vaguely alluded to in our main report and placed elsewhere, such as an online supplemental appendix. +Subsetting allows for a deep dive into a specific context. +It also can make it easier to think through what is happening in a complex context; think of it as a flashlight, shining attention on one part of your overall simulation or another, to focus attention and reduce complexity. +Sometimes we might even just report the results for a subset in our final analysis and put the analysis of the remaining scenarios elsewhere, such as an online supplemental appendix. +In this case, it would then be our job to verify that our reported findings on the main results indeed were echoed in the set-aside runs. -It would be our job, in this case, to verify that our reported findings on the main results indeed were echoed in our other, set-aside, simulation runs. -In our case, as we see below, we will see little effect of the ICC on how one model performs relative to another; we thus might be able to safely ignore the ICC factor in our main report. + -Subsetting is useful, but if you do want to look at all your simulation results at once, you need to somehow aggregate your results to make them all fit on the plot. -We next present bundling, a way of using the core idea of small multiples for showing all of the raw results, but in a semi-aggregated way. +Subsetting is useful, but if you do want to look at all your simulation results at once, you need to somehow aggregate or group your results to make them all fit on the plot. +We next present bundling, a way of keeping the core idea of small multiples to show all of the raw results, but now in a semi-aggregated way. 
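The subsetting idea can be sketched in a couple of lines. (This uses a toy stand-in for the chapter's `sres` summary table; the factor names mirror the simulation, but the values are invented.)

```r
library(dplyr)

# Toy stand-in for the summarized results: one row per scenario x method.
sres_toy <- expand.grid(
  method = c("LR", "MLM", "Agg"),
  ICC    = c(0, 0.2, 0.6),
  n_bar  = c(20, 80, 320),
  stringsAsFactors = FALSE
)
sres_toy$bias <- 0  # placeholder performance values

# Subset to a single slice of the design before plotting or tabulating,
# to reduce the complexity of what we are looking at.
sub <- sres_toy %>%
  filter(ICC == 0.2, n_bar == 80)

nrow(sub)  # one row per method
```

Any plot or table built from `sub` then describes just that slice of the design.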
## Bundling When faced with many simulation factors, we can _bundle_ the simulations into groups defined by a selected primary factor of interest, and then plot each bundle with a boxplot of the distribution of a selected performance criteria. -Each boxplot shows the central measure of how well an estimator worked across a set of scenarios, along with a sense of how much that performance varied across those scenarios. +Each boxplot shows the central measure of how well an estimator worked across a set of scenarios, along with a sense of how much that performance varied across the scenarios in the box. If the boxes are narrow, then we know that the variation across simulations within the box did not impact performance much. If the boxes are wide, then we know that the factors that vary within the box matter a lot for performance. With bundling, we generally need a good number of simulation runs per scenario, so that the MCSE in the performance measures does not make our boxplots look substantially more variable (wider) than the truth. -(Consider a case where all the scenarios within a box have zero bias; if MCSE were large, we would see a wide boxplot when we should not.) +Consider a case where all the scenarios within a box have zero _true_ bias; if the MCSE were large, the _estimated_ biases would still vary and we would see a wide boxplot when we should not. -To illustrate bundling, we group our Cluster RCT results by method, ICC, the size coefficient (how strong the cluster size to treatment impact relationship is), and alpha (how much the cluster sizes vary). -For a specific ICC, size, and alpha, we will put the boxes for the three methods side-by-side to directly compare them: +To illustrate bundling, we replicate our small subset figure from above, but instead of each point (with a given `J`, `alpha`, and `size_coef`) just being the single scenario with `n_bar=80` and `ICC = 0.20`, we plot all the scenarios in a boxplot at that location. 
+We put the boxes for the three methods side-by-side to directly compare them: ```{r clusterRCT_plot_bias_v1} -ggplot( sres, aes( as.factor(alpha), bias, col=method, group=paste0(ICC,method) ) ) + - facet_grid( size_coef ~ ICC, labeller = label_both ) + - geom_boxplot(coef = Inf) + +ggplot( sres, aes( as.factor(J), bias, col=method, + group=paste0(method, J) ) ) + + facet_grid( size_coef ~ alpha, labeller = label_both ) + + geom_boxplot( coef = Inf, width=0.7, fill="grey" ) + geom_hline( yintercept = 0 ) + theme_minimal() + ``` -Each box is a collection of simulation trials. E.g., for `ICC = 0.6`, `size_coef = 0.2`, and `alpha = 0.8` each of the three boxes contains 9 scenarios representing the varying level 1 and level 2 sample sizes. -Here are the 9 for the Aggregation method: + + +All of our simulation trials are represented in this plot. +Each box is a collection of simulation trials. E.g., for `J = 5`, `size_coef = 0`, and `alpha = 0.8` each of the three boxes contains 15 scenarios representing the varying ICC and cluster size. +Here are the 15 results in the top right box for the Aggregation method: ```{r} filter( sres, - ICC == 0.6, - size_coef == 0.2, - alpha == 0.8, method=="Agg" ) %>% - dplyr::select( n_bar:alpha, bias ) %>% + J == 5, + size_coef == 0, + alpha == 0.8, + method=="Agg" ) %>% + dplyr::select( n_bar, J, size_coef, ICC, alpha, bias, bias_mcse ) %>% + arrange( bias ) %>% knitr::kable( digits = 2 ) ``` Our bias boxplot makes some trends clear. -For example, we see that there is virtually no bias for any method when the size coefficient is 0 and the ICC is 0. -It is a bit more unclear, but it seems there is also virtually no bias when the size coefficient is 0 regardless of ICC, but the boxes get wider as ICC increases, making us wonder if something else is potentially going on. 
-When alpha is 0 and the size coefficient is 0.2, all methods have a negative bias for most scenarios considered, as all boxes and almost all of the whiskers are below the 0 line (when ICC is 0.6 or 0.8 we may have some instances of 0 or positive bias, if that is not MCSE giving long tails). +For example, we see that there is no bias, on average, for any method when the size coefficient is 0 and alpha is 0, especially when $J = 80$. + +When the size coefficient is 0.2, we also see LR jump out from the others when `alpha` is not 0. -The apparent outliers (long tails) for some of the boxplots suggest that the other two factors (cluster size and number of clusters) do relate to the degree of bias. We could try bundling along different aspects to see if that explains these differences: +The apparent outliers (long tails) for some of the boxplots suggest that the two remaining factors (ICC and cluster size) could relate to the degree of bias. +They could also be due to MCSE, and given that we primarily see these tails when $J$ is small, this is a real concern. +MCSE aside, a long tail means that some scenario in the box had a high level of estimated bias. +We could try bundling along different aspects to see if either of the remaining factors (e.g., ICC) explains these differences. +Here we try bundling cluster size and number of clusters. ```{r clusterRCT_plot_bias_v2} -ggplot( sres, aes( as.factor(n_bar), bias, col=method, group=paste0(n_bar,method) ) ) + - facet_grid( alpha ~ size_coef, labeller = label_both ) + - geom_boxplot(coef = Inf) + +ggplot( sres, aes( as.factor(alpha), bias, col=method, + group=paste0(method, alpha) ) ) + + facet_grid( size_coef ~ ICC, labeller = label_both ) + + geom_boxplot( coef = Inf, width=0.7, fill="grey" ) + geom_hline( yintercept = 0 ) + - theme_minimal() + theme_minimal() ``` 
-This could be MCSE, with some of our bias estimates being large due to random chance. -Or it could be some specific combination of factors allows for large bias (e.g., perhaps small sample sizes makes our estimators more vulnerable to bias). +We have some progress now: the long tails are primarily when the ICC is high, but we also see that MLM has bias with ICC is 0, if alpha is nonzero. + +We know things are more unstable in smaller samples sizes, so the tails could still be MCSE, with some of our bias estimates being large due to random chance. +Or perhaps there is still some specific combination of factors that allow for large bias (e.g., perhaps small sample sizes makes our estimators more vulnerable to bias). In an actual analysis, we would make a note to investigate these anomalies later on. -In general, playing around with factors so that the boxes are generally narrow is a good idea; it means that you have found a representation of the data where the variation within your bundles is less important. +In general, trying to group your simulation scenarios so that their boxes are generally narrow is a good idea; narrow boxes means that you have found a representation of the data where you know what is driving the variation in your performance measure, and that the factors bundled inside the boxes are less important. This might not always be possible, if all your factors matter; in this case the width of your boxes tells you to what extent the bundled factors matter relative to the factors explicitly present in your plot. +One might wonder, with only few trials per box, whether we should instead look at the individual scenarios. 
+Unfortunately, that gets a bit cluttered: + +```{r} +ggplot( sres, aes( as.factor(alpha), bias, col= method, + group=paste0(alpha,ICC,method) ) ) + + facet_grid( size_coef ~ ICC, labeller = label_both ) + + geom_point( size = 0.5, + position = position_dodge(width=0.7 ) ) + + geom_hline( yintercept = 0 ) + + theme_minimal() +``` + +Using boxplots, even with such a small number of points, notably clarifies a visualization. ## Aggregation Boxplots can make seeing trends more difficult, as the eye is drawn to the boxes and tails, and the range of your plot axes can be large due to needing to accommodate the full tails and outliers of your results; this can compress the mean differences between groups, making them look small. +They can also be artificially inflated, especially if the MCSEs are large. Instead of bundling, we can therefore aggregate, where we average all the scenarios within a box to get a single number of average performance. This will show us overall trends rather than individual simulation variation. @@ -249,33 +290,25 @@ Our conclusions would then be more general: if we had not explored more scenario That said, if some of our scenarios had no bias, and some had large bias, when we aggregated we would report that there is generally a moderate amount of bias. This would not be entirely faithful to the actual results. 
-Usually, with aggregation, we want to average over something we believe does not change massively over the marginalized-out factors. -To achieve this, we can often average over a relative measure (such as standard error divided by the standard error of some baseline method), which tend to be more invariant and comparable across scenarios. +But when the initial boxplots show results generally in one direction or another, then aggregation can be quite faithful to the spirit of the results. A major advantage of aggregation over the bundling approach is we can have fewer replications per scenario. If the number of replicates within each scenario is small, then the performance measures for each scenario is estimated with a lot of error; the aggregate, by contrast, will be an average across many more replicates and thus give a good sense of _average_ performance. The averaging, in effect, gives a lot more replications per aggregated performance measure. - For our cluster RCT, we might aggregate our bias across our sample sizes as follows: ```{r} ssres <- sres %>% - group_by( method, ICC, alpha, size_coef ) %>% - summarise( bias = mean( bias ), - n = n() ) + group_by( method, size_coef, J, alpha ) %>% + summarise( bias = mean( bias ) ) ``` -We now have a single bias estimate for each combination of ICC, alpha, and size_coef; we have collapsed 9 scenarios into one overall scenario that generalizes bias across different sizes of experiment. +We now have a single bias estimate for each combination of size_coef, J, and alpha; we have collapsed 15 scenarios into one overall scenario that generalizes bias across different average cluster sizes and different ICCs. 
We can then plot, using many small multiples: ```{r agg_bias_plot_clusterRCT} -ggplot( ssres, aes( ICC, bias, col=method ) ) + +ggplot( ssres, aes( as.factor(J), bias, col=method, group=method ) ) + facet_grid( size_coef ~ alpha, labeller = label_both ) + geom_point( alpha=0.75 ) + geom_line( alpha=0.75 ) + @@ -283,14 +316,33 @@ ggplot( ssres, aes( ICC, bias, col=method ) ) + theme_minimal() ``` -We see more clearly that greater variation in cluster size (alpha) leads to greater bias for the linear regression estimator, but only if the coefficient for size is nonzero (which makes sense given our theoretical understanding of the problem---if size is not related to treatment effect, it is hard to imagine how varying cluster sizes would cause much bias). +We now see quite clearly that as `alpha` grows, linear regression gets more biased if cluster size relates to average impact in the cluster (`size_coef`). +Our finding makes sense given our theoretical understanding of the problem---if size is not related to treatment effect, it is hard to imagine how varying cluster sizes would cause much bias. + We are looking at an interaction between our simulation factors: we only see bias for linear regression when cluster size relates to impact and there is variation in cluster size. -As ICC increases, we are not seeing any major differences in the pattern of our results -We also see that all the estimators have near zero bias when there is no variation in cluster size, with the overplotted lines on the top row of the figure. +We also see that all the estimators have near zero bias when there is no variation in cluster size or the cluster size does not relate to outcome, as shown by the top row and left column facets. +Finally, we see the methods all likely give the same answers when there is no cluster size variation, given the overplotted lines on the left column of the figure. + +We might take this figure as still too complex. 
+So far we have learned that MLM does seem to react to ICC, and that LR reacts to `alpha` and `size_coef` in combination. +More broadly, with many levels of a factor, as we have with ICC, we can let ggplot aggregate directly by taking advantage of `geom_smooth()`. +This leads to the following: + +```{r, echo=FALSE, message=FALSE} +ggplot( sres, aes( ICC, bias, col=as.factor(alpha), + group=interaction(method,alpha) ) ) + + facet_grid( size_coef ~ method, labeller = label_both ) + + geom_smooth( alpha=0.75, se=FALSE, method="loess", span=1.5 ) + + geom_hline( yintercept = 0 ) + + theme_minimal() +``` + +Our story is fairly clear now: LR is biased when alpha is large and the cluster size relates to impact. +MLM can be biased when ICC is low, if cluster size relates to impact (this is because it is driving towards person-weighting when there is little cluster variation). -If you have many levels of a factor, as we do with ICC, you can let ggplot aggregate directly by taking advantage of the smoothing options: -```{r, message=FALSE, warning=FALSE} + + +Aggregation is powerful, but it can be misleading if you have scaling issues or extreme outliers. +With bias, our scale is fairly well set, so we are good. +But if we were aggregating standard errors over different sample sizes, then the larger standard errors of the smaller sample size simulations (and the greater variability in estimating those standard errors) would swamp the standard errors of the larger sample sizes. +Usually, with aggregation, we want to average over something we believe does not change massively over the marginalized-out factors. +To achieve this, we can often average over a relative measure (such as standard error divided by the standard error of some baseline method), which tend to be more invariant and comparable across scenarios. +We will see more examples of this kind of aggregation later on. 
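The relative-measure idea can be sketched with toy numbers (invented here, with LR as the baseline method, as in this chapter's running example):

```r
library(dplyr)

# Toy SEs for two scenarios on very different scales.
toy <- data.frame(
  scenario = c(1, 1, 2, 2),
  method   = c("LR", "MLM", "LR", "MLM"),
  SE       = c(0.50, 0.45, 0.05, 0.04)
)

# Divide each method's SE by the baseline (LR) SE within the same scenario.
rel <- toy %>%
  group_by(scenario) %>%
  mutate(SE_rel = SE / SE[method == "LR"]) %>%
  ungroup()

# Averaging the relative measure is not swamped by the large-scale scenario:
# LR averages to 1 by construction, and MLM's average reflects its typical
# precision advantage across both scenarios.
rel %>%
  group_by(method) %>%
  summarise(mean_SE_rel = mean(SE_rel))
```

Averaging the raw SEs would instead be dominated by scenario 1, whose SEs are ten times larger.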
-#### A note on how to aggregate
+#### Some notes on how to aggregate

Some performance measures are biased with respect to the Monte Carlo uncertainty.
The estimated standard error, for example, is biased; the variance, by contrast, is not.
@@ -318,20 +379,26 @@ agg_perf <- sres %>%
  summarise( SE = sqrt( mean( SE^2 ) ) )
```

-Because bias is linear, you do not need to worry about the bias of the standard error.
+Because bias is a linear measure, the Monte Carlo errors average out, and you do not need to worry about them biasing your aggregate.
But if you are looking at the magnitude of bias ($|bias|$), then you can run into issues when the biases are close to zero, if they are measured noisily.
-In this case, looking at average bias, not average $|bias|$, is safer.
+For example, imagine you have two scenarios with true bias of 0.0, but your MCSE is 0.02.
+In one scenario, you estimate a bias of 0.017, and in the other -0.023.
+If you average the estimated biases, you get -0.003, which suggests a small bias as we would wish.
+Averaging the absolute biases, on the other hand, gives you 0.02, which could be deceptive.
+With high MCSE and small magnitudes of bias, looking at average bias, not average $|bias|$, is safer.
+Alternatively, you can use the formula $RMSE^2 = Bias^2 + SE^2$ to back out the average absolute bias from the RMSE and SE.


-## Assessing true SEs

-We just did a deep dive into bias.
+## Comparing true SEs with standardization
+
+We just did a deep dive into bias.
Uncertainty (standard errors) is another primary performance criterion of interest.
As an initial exploration, we plot the standard error estimates from our Cluster RCT simulation, using smoothed lines to visualize trends.
We use `ggplot`'s `geom_smooth` to aggregate over `size_coef` and `alpha`, which we leave out of the plot.
We include individual data points to visualize variation around the smoothed estimates: -```{r} +```{r, message=FALSE} ggplot( sres, aes( ICC, SE, col=method ) ) + facet_grid( n_bar ~ J, labeller = label_both ) + geom_jitter( height = 0, width = 0.05, alpha=0.5 ) + @@ -350,8 +417,8 @@ While we can extract all of these from the figure, the figure is still not ideal The dominant influence of design features like ICC and sample size obscures our ability to detect meaningful differences between methods. In other words, even though SE changes across scenarios, it’s difficult to tell which method is actually performing better within each scenario. -We can also view the same information using boxplots, effectively "bundling" over the left-out dimensions. -We also put `n_bar` in our bundles because maybe it does not matter that much: +We can also view the same information by bundling over the left-out dimensions. +We put `n_bar` in our bundles because maybe it does not matter that much: ```{r} ggplot( sres, aes( ICC, SE, col=method, group=paste0( ICC, method ) ) ) + facet_grid( . ~ J, labeller = label_both ) + @@ -373,13 +440,10 @@ we want to conclude that: Simulation results are often driven by broad design effects, which can obscure the specific methodological questions we care about. Standardizing helps bring those comparisons to the forefront. Let's try that next. -#### Standardizing to compare across simulation scenarios ##### - One straightforward strategy for standardization is to compare each method’s performance to a designated baseline. In this example, we use Linear Regression (LR) as our baseline. -We focus on the standard error (SE) of each method’s estimate, rescaling it relative to LR. -We do this by, for each simulation scenario, dividing each method’s SE by the SE of LR, to produce `SE.scale`. -This relative measure, SE.scale, allows us to examine how much better or worse each method performs in terms of precision under varying conditions. 
+We standardize by, for each simulation scenario, dividing each method’s SE by the SE of LR, to produce `SE.scale`. +This relative measure, `SE.scale`, allows us to examine how much better or worse, across our scenarios, each method performs relative to a chosen reference method. ```{r} ssres <- @@ -389,8 +453,8 @@ ssres <- ungroup() ``` -We can then treat it as a measure like any other. -Here we bundle: +We can then treat `SE.scale` as a measure like any other. +Here we bundle, showing how relative SE changes by J, `n_bar` and ICC: ```{r} ggplot( ssres, aes( ICC, SE.scale, col=method, @@ -400,77 +464,71 @@ ggplot( ssres, aes( ICC, SE.scale, col=method, scale_y_continuous( labels = scales::percent_format() ) ``` -The figure above shows how each method compares to LR across simulation scenarios. We see that Aggregation performs worse than LR when the Intraclass Correlation Coefficient (ICC) is zero. However, when ICC is greater than zero, Aggregation yields improved precision. -The Multilevel Model (MLM), in contrast, appears more adaptive. It captures the benefits of aggregation when ICC is high, but avoids the precision cost when ICC is zero. This adaptivity makes MLM appealing in practice when ICC is unknown or variable across contexts. +The figure above shows how each method compares to LR across simulation scenarios. +Aggregation clearly performs worse than LR when the Intraclass Correlation Coefficient (ICC) is zero. However, when ICC is greater than zero, Aggregation yields improved precision. +The Multilevel Model (MLM), in contrast, appears more adaptive. +It captures the benefits of aggregation when ICC is high, but avoids the precision cost when ICC is zero. +This adaptivity makes MLM appealing in practice when ICC is unknown or variable across contexts. -Although faceting by n_bar and J helps reveal potential interaction effects, it may be more effective to collapse across these variables for a cleaner summary. 
-We are also not seeing how site size variation is impacting these results, and we might think that matters, especially for aggregation.
-
-As a warning regarding Monte Carlo uncertainty: when standardizing results, it is important to remember that uncertainty in the baseline measure (here, LR) propagates to the standardized values. This should be considered when interpreting variability in the scaled results.
-Uncertaintly for relative performance is generally tricky to assess.
-
-To clarify the main patterns, we average our SE.scale across simulation settings---relative performance is on the same scale, so averaging is a natural thing to do.
-
-```{r}
-s2 <-
-  ssres %>%
-  group_by( ICC, alpha, method ) %>%
-  summarise( SE.scale = mean( SE.scale ) )
-
-ggplot( s2, aes( ICC, SE.scale, col=method ) ) +
-  facet_wrap( ~ alpha ) +
-  geom_point() + geom_line() +
-  scale_y_continuous( labels = scales::percent_format() ) +
-  labs( title = "Average relative SE to Linear Regression",
-        y = "Relative Standard Error" )
-```
-
-Our aggregated plot of precision of aggregation and MLM relative to Linear Regression gives a simple story clearly told.
-The performance of aggregation improves with ICC.
-MLM also has benefits over LR, and does not pay much cost when ICC is low.
-
-We can also visualize the variability in relative standard errors across simulation scenarios using boxplots.
-This allows us to examine how consistent each method’s performance is under different ICC conditions.
+In looking at the plot we are seeing essentially identical rows and fairly similar columns.
+This suggests we should bundle over `n_bar` to get a cleaner view of the main patterns, and that we can bundle over `J` as well.
+We finally drop the `LR` results entirely, as it is the reference method and always has a relative SE of 1.
```{r} ssres %>% filter( method != "LR" ) %>% ggplot( aes( ICC, SE.scale, col=method, group = interaction(ICC, method) ) ) + - facet_wrap( ~ alpha, nrow=1) + + facet_grid( size_coef ~ alpha, labeller = label_both ) + geom_hline( yintercept = 1 ) + geom_boxplot( position="dodge", width=0.1 ) + + scale_x_continuous( breaks=unique( ssres$ICC ) ) + scale_y_continuous( breaks=seq(90, 125, by=5 ) ) ``` -These boxplots show the full distribution of relative standard errors across all simulated scenarios, separated by ICC level. We exclude LR as it is the reference method. - The pattern is clear: when ICC = 0, Aggregation performs worse than LR, and MLM performs about the same. But as ICC increases, Aggregation and MLM both improve, and perform about the same to each other. This highlights the robustness of MLM across diverse conditions. -We might also explore how uncertainty changes with other factors. -Here, we see whether cluster size meaningfully helps: + +As a warning regarding Monte Carlo uncertainty: when standardizing results, it is important to remember that uncertainty in the baseline measure (here, LR) propagates to the standardized values. +This should be considered when interpreting variability in the scaled results. +Uncertainty for relative performance is generally tricky to assess. + + +To clarify the main patterns, we then aggregate our SE.scale across the bundled simulation settings---relative performance is on the same scale, so averaging is now a natural thing to do. 
+We have aggregated out sample sizes, and we go further and remove `size_coef` since it does not seem to matter much, given the above plot: ```{r} -sres %>% - filter( alpha == 0.8, size_coef == 0.2 ) %>% -ggplot( aes( n_bar, SE, col=factor(ICC), group=ICC ) ) + - facet_grid( J ~ method, labeller = label_both, scales = "free") + - geom_point() + geom_line() +s2 <- + ssres %>% + group_by( ICC, alpha, method ) %>% + summarise( SE.scale = mean( SE.scale ) ) %>% + filter( method != "LR" ) + +ggplot( s2, aes( ICC, SE.scale, col=method ) ) + + facet_wrap( ~ alpha, labeller = label_both ) + + geom_hline( yintercept = 1 ) + + geom_point() + geom_line() + + scale_y_continuous( labels = scales::percent_format() ) + + labs( title = "Average relative SE to Linear Regression", + y = "Relative Standard Error" ) ``` -If the ICC is low, cluster size matters. Otherwise, the benefits are much more slim. +Our aggregated plot of the precision of aggregation and MLM relative to Linear Regression gives a simple story clearly told. +The performance of aggregation improves with ICC. +MLM also has benefits over LR, and does not pay much cost when ICC is low. ## The Bias-SE-RMSE plot -We can also visualize bias and standard error together, along with RMSE, to get a fuller picture of performance. -To illustrate, we subset to our biggest scenarios, in terms of sample size, and no ICC: +We can visualize bias and standard error together, along with RMSE, to get a rich picture of performance. +To illustrate, we subset to our scenarios where there is real bias for both LR and MLM (i.e., when ICC is 0; see findings under bias from above). +We also subset to our middle values of `n_bar = 80` and our large `J=80`, where uncertainty is small and thus the relative role of bias may be large. 
```{r}
bsr <- sres %>%
-  filter( n_bar == 320, J==80, ICC == 0 )
+  filter( n_bar == 80, J==80, ICC == 0 )

bsr <- bsr %>%
  dplyr::select( -R, -power, -ESE_hat, -SD_SE_hat ) %>%
@@ -481,10 +539,12 @@ bsr <- bsr %>%
  summarise( value = mean( value ),
             n = n() )

-bsr$measure = factor( bsr$measure, levels=c("bias", "SE", "RMSE"),
-                      labels =c("|bias|", "SE", "RMSE" ) )
+bsr$measure = factor( bsr$measure,
+                      levels=c("bias", "SE", "RMSE"),
+                      labels =c("bias", "SE", "RMSE" ) )

-ggplot( bsr, aes( alpha, value, col=method )) +
+ggplot( bsr, aes( as.factor(alpha), value, col=method,
+                  group = method )) +
  facet_grid( size_coef ~ measure ) +
  geom_line() + geom_point() +
  labs( y = "", x = "Site Variation" ) +
@@ -497,31 +557,34 @@ ggplot( bsr, aes( alpha, value, col=method )) +
```

The combination of bias, standard error, and RMSE provides a rich and informative view of estimator performance.
+The top row represents settings where effect size is independent of cluster size, while the bottom row reflects a correlation between size and effect.
+We see how bias, SE and RMSE grow as site variation increases (moving rightward in each panel).
+Notably, when effect size is related to cluster size (bottom row), both linear regression and MLM exhibit significant bias, leading to a notable increase in RMSE over SE.
+In contrast, when effect size is unrelated to cluster size (top row), all methods show minimal bias, and the SEs are about the same; that said, we see aggregation paying a penalty as variation in cluster size increases.
+Overall, we see RMSE is primarily driven by SE.

-As an illustration, in the above plot, we focus on a specific simulation scenario with n_bar = 320, J = 80, and ICC = 0. The top row represents settings where effect size is independent of cluster size, while the bottom row reflects a correlation between size and effect.
-These types of visualizations directly illustrates the canonical relationship:
+The Bias-SE-RMSE visualization directly illustrates the canonical relationship:

$$
\text{RMSE}^2 = \text{Bias}^2 + \text{SE}^2
$$

-In the plot we get overall performance (RMSE) clearly decomposed into into its two fundamental components: systematic error (bias) and variability (standard error).
+The plot shows overall performance (RMSE) decomposed into its two fundamental components: systematic error (bias) and variability (standard error).
Here we see how bias for LR, for example, is dominant when site variation is high.
-The differences in SE are small and so not the main reason for differences in overall estimator performance; bias is the main driver.
+The differences in SE across methods are small and are thus not the main reason for differences in overall estimator performance; bias is the main driver.
This is the kind of diagnostic plot we often wish were included in more applied simulation studies.

-## Assessing estimated SEs
+## Assessing the quality of the estimated SEs

So far we have examined the performance of our _point estimators_.
We next look at ways to assess our _estimated_ standard errors.
A good first question is whether they are about the right size, on average, across all the scenarios.
-Here it is very important to see if they are _reliably_ the right size, so the bundling method is an especially important tool here.
+When assessing estimated standard errors it is very important to see if they are _reliably_ the right size, making the bundling method an especially important tool here.

We first see if the average estimated SE, relative to the true SE, is usually around 1 across all scenarios:
-```{r}
+```{r, out.width="100%"}
sres <- sres %>%
  mutate( inflate = ESE_hat / SE )

@@ -529,27 +592,30 @@ ggplot( sres, aes( ICC, inflate,
                   col=method, group = interaction(ICC,method) ) ) +
  facet_grid( .
~ J, labeller = label_both) +
-  geom_boxplot( position="dodge" ) +
+  geom_boxplot( position="dodge", outlier.size=0.5 ) +
  geom_hline( yintercept=1 ) +
-  labs( color="n" ) +
+  labs( color="n", y = "Inflation" ) +
  scale_y_continuous( labels = scales::percent_format() )
```

-We see that our estimated SEs are about right, on average, across all scenarios.
-When ICC is 0 and J is small, the MLM SEs are a bit too high.
-When J is 5, the LR estimator can be a bit low under some circumstances.
-We can start exploring these trends to dig into why our are wide (suggesting that other factors dictate when the SEs are biased).
+We see that, for the most part, our estimated SEs are about right, on average, across all scenarios.
+When the ICC is 0 and J is small, the MLM SEs are clearly too high.
+We also see that when J is 5, the LR estimator tends to be a bit low.

-We can look at the $J = 80$ to see what MCSEs are like.
-The `simhelpers` `calc_relative_var()` method gives mcses for relative bias.
+We next start exploring to dig into why our boxplots are wide.
+In particular, we want to see if other factors dictate when the SEs are biased.
+We first subset to the $J = 80$ scenarios to see if those box widths could just be due to the MCSEs.
+The `simhelpers` `calc_relative_var()` method gives MCSEs for the relative bias of an estimated _variance_ to the true _variance_.
+We thus square our estimated SEs to get variance estimates, and then use that function to see if the relative variance estimates are biased:

```{r}
se_res <- res %>%
  group_by( n_bar, J, ATE, size_coef, ICC, alpha, method ) %>%
  summarize( calc_relative_var( estimates = ATE_hat,
-                     var_estimates = SE_hat^2,
-                     criteria = "relative bias" ) )
+                               var_estimates = SE_hat^2,
+                               criteria = "relative bias" ) )
+
se_res %>%
  filter( J == 80, n_bar == 80 ) %>%
  ggplot( aes( ICC, rel_bias_var, col=method ) ) +
@@ -563,12 +629,15 @@ se_res %>%
                 width = 0 )
```

-In looking at this plot, we see no real evidence of miscalibration.
-This makes us think the boxes in the prior plot are wide due to MCSE rather than other simulation factors driving some slight miscalibration in some scenarios when $J$ is high.
+In looking at this plot, we see no real evidence of miscalibration: our confidence intervals are generally covering 1, meaning our average estimated variance is about the same as the true variance.
+This makes us think the boxes for $J=80$ in the prior plot are wide due to MCSE rather than other simulation factors driving some slight miscalibration.
We might then assume this applies to the $J = 20$ case as well.

-Finally, we can look at how stable the estimated SEs are, relative to the actual uncertainty.
-We calculate the standard deviation of the estimated standard errors and compare that to the standard deviation of the point estimate.
+### Stability of estimated SEs
+
+We can also look at how stable the estimated SEs are, relative to the actual uncertainty they are trying to capture.
+We do this by calculating the standard deviation of the estimated standard errors and comparing that to the standard deviation of the point estimate.
+This is related to the coefficient of variation of `SE_hat`.

```{r}
sres <- mutate( sres,
@@ -580,12 +649,14 @@ ggplot( sres,
  facet_grid( . ~ J, labeller = label_both) +
  geom_boxplot( position="dodge" ) +
  labs( color="n" ) +
-  scale_y_continuous( labels = scales::percent_format() )
+  scale_y_continuous( labels = scales::percent_format() ) +
+  scale_x_continuous( breaks = unique( sres$ICC ) )
```

-It looks like MLM has more reliably estimated SEs than other methods when ICC is small.
-Aggregation has more trouble estimating uncertainty when J is small.
-Finally, LR's SEs are generally more unstable, relative to its performance, when $J$ is larger.
+Overall, we have a lot of variation in the estimated SEs, relative to the actual uncertainty.
+We also see that MLM has more reliably estimated SEs than other methods when ICC is small.
+Aggregation has relatively more trouble estimating uncertainty when J is small. +Finally, LR's SEs are slightly more unstable, relative to the other methods, when $J$ is larger. Assessing the stability of standard errors is usually very in the weeds of a performance evaluation. It is a tricky measure: if the true SE is high for a method, then the relative instability will be lower, even if the absolute instability is the same. @@ -595,9 +666,9 @@ People often look at confidence interval coverage and confidence interval width, ## Assessing confidence intervals -Coverage is a blend of how accurate our estimates are and how good our estimated SEs are. +Coverage is a blend of how accurate (unbiased) our estimates are and how good our estimated SEs are. To assess coverage, we first calculate confidence intervals using the estimated effect, estimated standard error, and degrees of freedom. -Once we have our calculated t-based intervals, we can average them across runs to get average width and coverage using `simhelpers`. +Once we have our calculated $t$-based intervals, we can average them across runs to get average width and coverage using `simhelpers`'s `calc_coverage()` method. A good confidence interval estimator would be one which is generally relatively short while maintaining proper coverage. Our calculations are as so: @@ -626,16 +697,17 @@ c_sub <- covres %>% ggplot( c_sub, aes( ICC, coverage, col=method, group=method ) ) + facet_grid( . ~ J, labeller = label_both ) + - geom_line() + - geom_point() + + geom_line( position = position_dodge( width=0.05)) + + geom_point( position = position_dodge( width=0.05) ) + geom_errorbar( aes( ymax = coverage + 2*coverage_mcse, - ymin = coverage - 2*coverage_mcse ), width=0 ) + + ymin = coverage - 2*coverage_mcse ), width=0, + position = position_dodge( width=0.05) ) + geom_hline( yintercept = 0.95 ) ``` Generally coverage is good unless $J$ is low or ICC is 0. 
-Monte Carlo standard errors indicate that, in some settings, the observed coverage is reliably different from the nominal 95%, suggesting issues with estimator bias, standard error estimation, or both.
-We might want to see if these results are general across other settings (see exercises).
+Confidence intervals based on the Monte Carlo standard errors of our performance metrics indicate that, in some settings, the observed coverage is reliably different from the nominal 95%, suggesting issues with estimator bias, standard error estimation, or both.
+We might then want to see if these results are general across the other simulation scenarios (see exercises).

For confidence interval width, we can calculate the average width relative to the width of LR across all scenarios:

@@ -663,6 +735,9 @@ ggplot( c_agg, aes( ICC, width_rel, col=method, group=method ) ) +
Confidence interval width serves as a proxy for precision. Narrow intervals suggest more precise estimates. We see MLM has wider intervals, relative to LR, when ICC is low.
When there is site variation, both Agg and MLM have shorter intervals.
+This plot essentially echoes our standard error findings, as expected.
+There are mild differences due to differences in how the degrees of freedom are calculated, however.
+

@@ -694,6 +769,10 @@ In Section \@ref(using-pmap-to-run-multifactor-simulations), we generated result

Write a brief explanation of how the plot is laid out and explain why you chose to construct it as you did.

+### Making another plot for assessing SEs
+
+In the main chapter we examined how SE changes as a function of various simulation factors.
+Now generate a plot to see whether and when cluster size meaningfully helps precision, and explain what you find.
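One possible starting point for the cluster-size exercise above is the following R sketch. It assumes the chapter's `sres` summary data frame (with columns `n_bar`, `J`, `ICC`, `alpha`, `size_coef`, `SE`, and `method`), and the particular levels held fixed in the `filter()` call are illustrative choices:

```r
library(tidyverse)

# Illustrative sketch only: hold site variation (alpha) and the size
# coefficient fixed at single levels, then plot the true SE against
# average cluster size, faceting by number of clusters and method.
sres %>%
  filter( alpha == 0.8, size_coef == 0.2 ) %>%
  ggplot( aes( n_bar, SE, col = factor(ICC), group = ICC ) ) +
  facet_grid( J ~ method, labeller = label_both, scales = "free" ) +
  geom_point() + geom_line() +
  labs( color = "ICC", y = "True SE",
        x = "Average cluster size (n_bar)" )
```

If the ICC is low, increasing cluster size should visibly shrink the SE; otherwise the gains are much slimmer.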
diff --git a/075-special-topics-on-reporting.Rmd b/075-special-topics-on-reporting.Rmd
index ee54c53..029d8f8 100644
--- a/075-special-topics-on-reporting.Rmd
+++ b/075-special-topics-on-reporting.Rmd
@@ -34,7 +34,7 @@ sres <-
  )
sres

-# 100 iterations per factor
+# 1000 iterations per factor
summary( sres$R )
```

@@ -47,16 +47,17 @@ We then dive more deeply into what to do when you have only a few iterations per

## Using regression to analyze simulation results

-In Chapter \@ref(presentation-of-results) we saw some examples of using regression and ANOVA to analyze simulation results.
-We next provide some further in-depth examples that give the code for doing this sort of thing.
+In Chapter \@ref(presentation-of-results) we saw some examples of using regression and ANOVA on the simulation results to summarize overall patterns across scenarios.
+In this chapter we provide some further in-depth examples along with the R code for doing this sort of thing.

### Example 1: Biserial, revisited

-We first give the code that produced the final ANOVA summary table for the biserial correlation example in Chapter \@ref(presentation-of-results).
-In the visualization there, we saw that several factors appeared to impact bias, but we might want to get a sense of how much.
-Under modeling of that same chapter, we saw a table that partialed out the variance across several factors so we could see which simulation factors mattered most for bias.
+As our first in-depth example, we walk through the analysis that produces the final ANOVA summary table for the biserial correlation example in Chapter \@ref(presentation-of-results).
+In the visualization there, we saw that several factors appeared to impact bias.
+The eta table presented later in that same chapter then decomposed the variance across several factors so we could see which simulation factors mattered most for bias.
-To build that table, we first fit a regression model to see:
+To build that table, we first fit a regression model, regressing bias on all the simulation factors.
+Before fitting, we convert each simulation factor to a factor variable, so that R does not assume a continuous relationship.

```{r, include=FALSE}
load("data/d2r results.rData")
@@ -65,10 +66,10 @@ allResults <-
allResults %>%
  mutate(
    n = ordered(n),
-    p_inv = p1,
    p1 = factor(p1, levels = c(2:5,8)) |> fct_relabel(\(x) paste0("p1 = 1/", x)),
-    fixed = factor(fixed, levels = c(TRUE,FALSE), c("Fixed percentiles","Sample percentiles"))
+    fixed = factor(fixed, levels = c(TRUE,FALSE),
+                   c("Fixed percentiles","Sample percentiles"))
  )

r_F <-
@@ -76,10 +77,10 @@ r_F <-
  filter(stat=="r.i" & design=="Extreme Group") %>%
  droplevels() %>%
  mutate(
-    fixed = fct_recode(fixed, "Pop. cut-off" = "Fixed percentiles", "Sample cut-off" = "Sample percentiles"),
-    bias = mean - rho,
-    bias.sm = mean.sm - rho,
-    rmse = sqrt(bias^2 + var)
+    fixed = fct_recode(fixed,
+                       "Pop. cut-off" = "Fixed percentiles",
+                       "Sample cut-off" = "Sample percentiles"),
+    bias = mean - rho
  )
```

```{r}
mod = lm( bias ~ fixed + rho + I(rho^2) + p1 + n, data = r_F)
summary(mod, digits=2)
```

-The above printout gives main effects for each factor, averaged across other factors.
-Because `p1` and `n` are ordered factors, the `lm()` command automatically generates linear, quadradic, cubic and fourth order contrasts for them.
+The above printout gives main effects for each factor, averaged across the others.
+Because `p1` and `n` are ordered factors, the `lm()` command automatically generates linear, quadratic, cubic and fourth order contrasts for them.
We smooth our `rho` factor, which has many levels of a continuous measure, with a quadratic curve.
We could instead use splines or some local linear regression if we were worried about model fit for a complex relationship.

The main effects are summaries of trends across contexts.
For example, averaged across the other contexts, the "sample cutoff" condition is around 0.004 lower than the population (the baseline condition). -We can also use ANOVA to get a sense of the major sources of variation in the simulation results (e.g., identifying which factors have negligible/minor influence on the bias of an estimator). +As shown in Chapter \@ref(presentation-of-results), we can also use ANOVA to get a sense of the major sources of variation in the simulation results (e.g., identifying which factors have negligible/minor influence on the bias of an estimator). To do this, we use `aov()` to fit an analysis of variance model: ```{r anova_example, warning=FALSE} @@ -106,8 +107,10 @@ summary(anova_table) ``` The advantage here is the multiple levels of our categorical factors get bundled together in our table of results, making a tidier display. +Note we are including interactions between our simulation factors. +The prior linear regression model was just estimating main effects of the factors, and not estimating these more complex relationships. -The table in Chapter \@ref(presentation-of-results) is a summary of this anova table, which we generate as follows: +The eta table in Chapter \@ref(presentation-of-results) is a summary of this anova table, which we generate as follows: ```{r, warning=FALSE, eval=FALSE} library(lsr) @@ -117,20 +120,19 @@ etaSquared(anova_table) %>% mutate( order = 1 + str_count(source, ":" ) ) %>% group_by( order ) %>% arrange( -eta.sq, .by_group = TRUE ) %>% - relocate( order ) %>% - knitr::kable( digits = 2 ) + relocate( order ) ``` We group the results by the order of the interaction, so that we can see the main effects first, then two-way interactions, and so on. We then sort within each group to put the high importance factors first. -The resulting variance decomposition table (see Chapter \@ref(presentation-of-results)) shows the amount of variation explained by each combination of factors. 
+The resulting variance decomposition table shows the amount of variation explained by each combination of factors.

### Example 2: Cluster RCT example, revisited

-When we have several methods to compare, we can also use meta-regression to understand how these methods change as other simulation factors change.
-We next continue our running Cluster RCT example.
+When we have several methods to compare, we can use meta-regression to understand how these methods change as other simulation factors change.
+We next illustrate this with our running Cluster RCT example.

We first turn our simulation levels (except for ICC, which has several levels) into factors, so R does not assume that sample size, for example, should be treated as a continuous variable:

@@ -151,8 +153,8 @@ M <- lm( bias ~ (n_bar + J + size_coef + ICC + alpha) * method,
stargazer::stargazer(M, type = "text", single.row = TRUE )
```

-We can quickly generate a lot of regression coefficients, making our meta-regression somewhat hard to interpret.
-The above model does not even have interactions of the simulation factors, even though the plots we have seen strongly suggest interactions among the simulation factors.
+With even a modestly complex simulation, we can quickly generate a lot of regression coefficients, making our meta-regression somewhat hard to interpret.
+The above model does not even include interactions between the simulation factors, even though the plots we have seen strongly suggest such interactions exist.
That said, picking out the significant coefficients is a quick way to obtain clues as to what is driving performance.
For instance, several features interact with the LR method for bias.
The other two methods seem less impacted.




-We can simplify our model using LASSO regression, to drop coefficients that are less relevant.
+We can simplify a meta-regression model using LASSO regression, to drop coefficients that are less relevant.
This requires some work to make our model matrix of dummy variables with all the interactions.

```{r}
library(modelr)
library(glmnet)

-# Define formula with all three-way interactions
+# Define formula
form <- bias ~ ( n_bar + J + size_coef + ICC + alpha) * method

# Create model matrix
-X <- model.matrix(form, data = sres_f)[, -1]  # drop intercept
+X <- model.matrix(form, data = sres_f)[, -1]
+# The [,-1] drops the intercept

# Fit LASSO
fit <- cv.glmnet(X, sres_f$bias, alpha = 1)
@@ -242,9 +245,11 @@ coef(fit2, s = "lambda.1se") %>%
```


-#### Fitting models to each method
+#### Fitting meta models to each method

-We know each method responds differently to the simulation factors, so we could fit three models, one for each method, and compare them.
+We know each method responds differently to the simulation factors, so we could fit three models, one for each method.
+This will give us a picture of what influences each method's performance in turn.
+We can then make a table of the coefficients to compare the methods with one another.
```{r}
meth = c( "LR", "MLM", "Agg" )
@@ -275,21 +280,17 @@ m_resL <- m_res %>%
  pivot_longer( -term, names_to = "model",
                values_to = "estimate" ) %>%
  mutate( term = factor(term, levels = unique(term)) ) %>%
-  mutate( has_nbar = str_detect(term, "n_bar" ),
-          has_J = str_detect(term, "J"),
-          has_size_coef = str_detect(term, "size_coef"),
-          has_ICC = str_detect(term, "ICC"),
-          has_alpha = str_detect(term, "alpha") )
+  mutate( column = ifelse( as.numeric(term) <= nlevels(term)/2, "A", "B" ) )

ggplot( m_resL,
        aes( x = term, y = estimate, fill = model, group = model ) ) +
-  facet_wrap( ~ has_nbar, scales="free_y" ) +
+  facet_wrap( ~ column, scales="free_y" ) +
  geom_bar( stat = "identity", position = "dodge" ) +
  coord_flip()
```

-Here we see how LR stands out, but also how MLM stands out under different simulation factor combinations.
+Here we see how LR stands out, but also how MLM stands out under different simulation factor combinations (see, e.g., the interaction of ICC with `alpha` at 0.8).
Staring at this provides some understanding of how the methods are similar, and dissimilar.

For another example we turn to the standard error.
@@ -319,16 +320,12 @@ m_resL <- m_res %>% pivot_longer( -term, names_to = "model", values_to = "estimate" ) %>% mutate( term = factor(term, levels = unique(term)) ) %>% - mutate( has_nbar = str_detect(term, "n_bar" ), - has_J = str_detect(term, "J"), - has_size_coef = str_detect(term, "size_coef"), - has_ICC = str_detect(term, "ICC"), - has_alpha = str_detect(term, "alpha") ) + mutate( column = ifelse( as.numeric(term) <= nlevels(term)/2, "A", "B" ) ) ggplot( m_resL, aes( x = term, y = estimate, fill = model, group = model ) ) + - facet_wrap( ~ has_nbar, scales="free_y" ) + + facet_wrap( ~ column, scales="free_y" ) + geom_bar( stat = "identity", position = "dodge" ) + geom_hline( yintercept = 0, linetype = "dashed" ) + labs( y = "Relative change in SE", @@ -338,16 +335,16 @@ ggplot( m_resL, ``` This clearly shows that the methods are basically the same in terms of uncertainty estimation. -We also see some interesting trends, such as the impact of `n_bar` declines when ICC is higher (see the interaction terms at rigth of plot). +We also see some interesting trends, such as the impact of `n_bar` declines when ICC is higher (see the interaction terms at right of plot that offset the `n_bar` main effects). ## Using regression trees to find important factors -With more complex experiments, where the various factors are interacting with each other in strange ways, it can be a bit tricky to decipher which factors are important and identify stable patterns. -Another approach we might use to explore is to fit a regression tree on the simulation results. - +With more complex experiments, where the various factors are interacting with each other in strange ways, it can be a bit tricky to decipher which factors are important and what patterns are stable. +Another exploration approach we might use is regression trees. 
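In its most basic form, such a tree can be fit directly with the `rpart` package. Here is a minimal sketch of the idea (our own illustration with assumed settings; the chapter's figures use the custom wrapper introduced below instead):

```r
library(rpart)
library(rpart.plot)

# Sketch: regress a performance measure on the simulation factors and
# let rpart find the splits that matter most. `sres_f` is the table of
# scenario-level results used throughout the chapter.
tree <- rpart( bias ~ method + n_bar + J + size_coef + ICC + alpha,
               data = sres_f, method = "anova" )

# Prune at the cross-validated complexity parameter recorded in
# rpart's cptable, then plot the pruned tree.
best_cp <- tree$cptable[ which.min( tree$cptable[, "xerror"] ), "CP" ]
rpart.plot( prune( tree, cp = best_cp ) )
```

The cross-validated `xerror` column is what drives the default pruning discussed next.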
+Here, for example, we see what predicts larger bias amounts: ```{r} source( here::here( "code/create_analysis_tree.R" ) ) set.seed(4344443) @@ -364,7 +361,7 @@ This function is a wrapper of the `rpart` package. The default pruning is based on a cross-fitting evaluation, and our sample size is not too terribly high (just the number of simulation scenarios fit). Rerunning the code with a different seed can give a different tree. In general, it might be worth forcibly simplifying the tree. -Trees are built greedily, so forcibly trimming often gives you the big things. +Trees are built greedily, so forcibly trimming often leaves you only with the big things. For example: ```{r} @@ -375,7 +372,7 @@ create_analysis_tree( sres_f, tree_title = "Smaller Cluster RCT Bias Analysis Tree" ) ``` -A very straightforward story: if `size_coef` is not 0, we are using LR, and alpha is large, then we have large bias. +This tree gives a very straightforward story: if `size_coef` is not 0, we are using LR, then alpha drives bias. We can also zero in on specific methods to understand how they engage with the simulation factors, like so: @@ -389,101 +386,189 @@ create_analysis_tree( filter( sres_f, method=="LR" ), ``` We force more leaves to get at some more nuance. -We again immediately see, for the LR method, that bias is large when we have non-zero size coefficient _and_ large alpha value. +We again immediately see, for the LR method, that bias is large when we have non-zero size coefficient _and_ a large alpha value. Then, when $J$ is small, bias is even larger. -Generally we would not use a tree like this for a final reporting of results, but they can be important tools for _understanding_ your results, which leads to how to make and select more conventional figures for final reporting. 
+Generally we would not use a tree like this for a final reporting of results, but they can be important tools for _understanding_ your results, which leads to how to make and select more conventional figures for an outward facing document. ## Analyzing results with few iterations per scenario -When your simulation iterations are expensive to run (i.e., when each model fitting takes several minutes), then running thousands of iterations for many scenarios may not be computationally feasible. -But running simulations with a smaller number of iterations will yield very noisy estimates of estimator performance. -For a given scenario, if the methods being evaluated are substantially different, then the main patterns in performance might become evident even with only a few iterations. More generally, however, the Monte Carlo Standard Errors (MCSEs) may be so large that you will have a hard time discriminating between systematic patterns and noise. +When each simulation iteration is expensive to run (i.e., when each model fitting takes several minutes), then running thousands of iterations for many scenarios may not be computationally feasible. +But running simulations with only a small number of iterations will yield very noisy estimates of estimator performance. +Now, if the methods being evaluated are substantially different, then differences in performance might still be evident even with only a few iterations. +More generally, however, the Monte Carlo Standard Errors (MCSEs) may be so large that you will have a hard time discriminating between systematic patterns and noise. -One tool to handle this is aggregation: if you use visualization methods that average across scenarios, those averages will have more precise estimates of (average) performance. +One tool to handle few iterations is aggregation: if you average across scenarios, those averages will have more precise estimates of (average) performance than the estimates of performance within the scenarios. 
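The precision gain from averaging can be sketched with a quick back-of-the-envelope calculation (the numbers below are made up for illustration, not taken from the simulation):

```r
# Averaging a performance estimate over K independent scenarios shrinks
# its MCSE by a factor of 1/sqrt(K).
K <- 9         # number of scenarios averaged over (assumed)
R <- 100       # replicates per scenario (assumed)
sd_err <- 0.5  # SD of the estimator's error within a scenario (assumed)

mcse_single <- sd_err / sqrt( R )    # MCSE of bias in one scenario
mcse_agg <- mcse_single / sqrt( K )  # MCSE of the K-scenario average
round( c( single = mcse_single, aggregated = mcse_agg ), 3 )
```

With $K = 9$, the aggregated MCSE is a third of the single-scenario one: averaging across nine scenarios acts like having nine times the replicates.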
Do not, by contrast, trust the bundling approaches--the MCSEs will make your boxes wider, and give the impression that there is more variation across scenarios than there really is. -Regression approaches can be particularly useful: the regressions will effectively average performance across scenario, and give summaries of overall trends. + +Regression approaches can be particularly useful: a regression will effectively average performance across scenario, and give summaries of overall trends. You can even fit random effects regression, specifically accounting for the noise in the scenario-specific performance measures. For more on this approach see @gilbert2024multilevel. ### Example: ClusterRCT with only 100 replicates per scenario -In the prior chapter we analyzed the results of our cluster RCT simulation with 1000 replicates per scenario. +```{r, include=FALSE} +# Make small dataset +res_small <- res %>% + mutate( runID = as.numeric( runID ) ) %>% + filter( runID <= 100 ) + +ssres <- res_small %>% + group_by( n_bar, J, ATE, size_coef, ICC, alpha, method ) %>% + summarise( + bias = mean(ATE_hat - ATE), + SE = sd( ATE_hat ), + RMSE = sqrt( mean( (ATE_hat - ATE )^2 ) ), + ESE_hat = sqrt( mean( SE_hat^2 ) ), + SD_SE_hat = sqrt( sd( SE_hat^2 ) ), + power = mean( p_value <= 0.05 ), + R = n(), + .groups = "drop" + ) +ssres + +# Now 100 iterations per factor +summary( ssres$R ) + +``` + +In the prior chapter we analyzed the results of our cluster RCT simulation with 1000 iterations per scenario. But say we only had 100 per scenario. -Using the prior chapter as a guide, we recreate some of the plots to show how MCSE can distort the picture of what is going on. +Using the prior chapter as a guide, we next recreate some of the plots to show how MCSE can distort the picture of what is going on. First, we look at our single plot of the raw results. Before we plot, however, we calculate MCSEs and add them to the plot as error bars. 
```{r}
sres_sub <-
-  sres %>%
+  ssres %>%
  filter( n_bar == 320, J == 20 ) %>%
  mutate( bias.mcse = SE / sqrt( R ) )

+dodge <- position_dodge(width = 0.35)
ggplot( sres_sub, aes( as.factor(alpha), bias,
                       col=method, pch=method, group=method ) ) +
  facet_grid( size_coef ~ ICC, labeller = label_both ) +
-  geom_point() +
+  geom_point( position = dodge ) +
  geom_errorbar( aes( ymin = bias - 2*bias.mcse,
                      ymax = bias + 2*bias.mcse ),
-                width = 0 ) +
-  geom_line() +
+                width = 0,
+                position = dodge ) +
+  geom_line( position = dodge ) +
  geom_hline( yintercept = 0 ) +
-  theme_minimal()
+  theme_minimal() +
+  coord_cartesian( ylim = c(-0.10,0.10) )
```

-Aggregation should smooth out some of our uncertainty.
-When we aggregate across 9 scenarios, our number of replicates goes from 100 to 900; our MCSEs should be about a third the size.
-Here is our aggregated bias plot:
+Our uncertainty is much less when ICC is 0; this is because our estimators are far more precise due to not having cluster variation to contend with.
+We also see substantial amounts of uncertainty, making it very hard to tell the different estimators apart.
+In the top row, second plot from left, we see that the three estimators are co-dependent: they all react similarly to the same datasets, so if we end up with datasets that randomly lead to large estimates, all three will give large estimates.
+The shape we are seeing is not a systematic bias, but rather a shared random variation.
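This co-dependence is easy to reproduce in a toy setting (a made-up example, separate from the cluster RCT simulation): two estimators computed on the same simulated datasets make errors that rise and fall together.

```r
# Toy sketch: apply two estimators of a mean to the SAME datasets and
# look at how their errors correlate across replications.
set.seed( 101 )
errs <- replicate( 100, {
  y <- rnorm( 50 )                  # one simulated dataset, true mean 0
  c( mn = mean( y ),                # estimator 1's error
     tr = mean( y, trim = 0.1 ) )   # estimator 2's error
} )
cor( errs["mn", ], errs["tr", ] )   # nearly 1: the errors are shared
```

Because our three methods all see the same datasets, their estimated biases inherit this kind of shared noise, which is why the lines in the plot move together.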
-```{r}
-sres_sub2 <-
+Here is the same plot with the full 1000 replicates:
+
+```{r, echo=FALSE}
+sres_sub_full <-
  sres %>%
+  filter( n_bar == 320, J == 20 ) %>%
+  mutate( bias.mcse = SE / sqrt( R ) )
+
+dodge <- position_dodge(width = 0.35)
+ggplot( sres_sub_full, aes( as.factor(alpha), bias,
+                       col=method, pch=method, group=method ) ) +
+  facet_grid( size_coef ~ ICC, labeller = label_both ) +
+  geom_point( position = dodge ) +
+  geom_errorbar( aes( ymin = bias - 2*bias.mcse,
+                      ymax = bias + 2*bias.mcse ),
+                width = 0,
+                position = dodge ) +
+  geom_line( position = dodge ) +
+  geom_hline( yintercept = 0 ) +
+  theme_minimal() +
+  coord_cartesian( ylim = c(-0.10,0.10) )
+```
+
+The MCSEs have shrunk by a factor of about $1/\sqrt{10} \approx 0.32$, as we would expect (generally the MCSEs will be on the order of $1/\sqrt{R}$, where $R$ is the number of replicates, so to halve the MCSE you need to quadruple the number of replicates).
+Also note the top-left pattern has shifted to a flat, slightly elevated line: we don't know if the elevation is real, just as we don't know if the dip in the prior plot was real.
+Our confidence intervals still include 0: it is possible there is no bias at all when the size coefficient is 0 (in fact we are fairly sure it is indeed the case).
+
+```{r, include=FALSE}
+summary( sres_sub_full$bias.mcse / sres_sub$bias.mcse )
+```
+
+Moving back to our "small replicates" simulation, we can use aggregation to smooth out some of our uncertainty.
+For example, if we aggregate across 9 scenarios, our number of replicates goes from 100 to 900; our MCSEs should then be about a third the size.
+To calculate an aggregated MCSE, we aggregate our scenario-specific MCSEs as follows:
+$$ MCSE_{agg} = \sqrt{ \frac{1}{K^2} \sum_{k=1}^{K} MCSE_k^2 } $$
+
+where $MCSE_k$ is the Monte Carlo Standard Error for scenario $k$, and $K$ is the number of scenarios.
+Assuming a collection of estimates are independent, the overall $SE^2$ of an average is the average $SE^2$ divided by $K$. +In code we have: + +```{r, echo=FALSE} +sres_sub2 <- + ssres %>% mutate( bias.mcse = SE / sqrt( R ) ) %>% - group_by( n_bar, J ) %>% + group_by( method, alpha, size_coef, ICC ) %>% summarise( bias = mean( bias ), bias.mcse = sqrt( mean( bias.mcse^2 )) / sqrt(n()), + n = n(), .groups = "drop" ) +``` +Note that the `SE` variable is simply the standard deviation of the estimates. -ggplot( sres_sub, aes( as.factor(alpha), bias, +Here is our aggregated bias plot, aggregating across `n_bar` and `J`: + +```{r, echo=FALSE} +ggplot( sres_sub2, aes( as.factor(alpha), bias, col=method, pch=method, group=method ) ) + facet_grid( size_coef ~ ICC, labeller = label_both ) + - geom_point() + + geom_point( position = dodge ) + geom_errorbar( aes( ymin = bias - 2*bias.mcse, ymax = bias + 2*bias.mcse ), - width = 0 ) + - geom_line() + + width = 0, + position = dodge ) + + geom_line( position = dodge ) + geom_hline( yintercept = 0 ) + - theme_minimal() + theme_minimal() + + coord_cartesian( ylim = c(-0.10,0.10) ) ``` -To get aggregate MCSE, we aggregate our scenario-specific MCSEs as follows: -$$ MCSE_{agg} = \sqrt{ \frac{1}{K^2} \sum_{k=1}^{K} MCSE_k^2 } $$ +```{r, include=FALSE} +sres_sub2 +sres_sub +ss = left_join( sres_sub2, + sres_sub, + by = c("method", "alpha", "size_coef", "ICC") ) +# Off because we are comparing to only one scenario with very specific MCSE, but we need to look at average MCSE across the averaged scenarios. +summary( ss$bias.mcse.x / ss$bias.mcse.y ) + +``` -where $MCSE_i$ is the Monte Carlo Standard Error for scenario $i$, and $k$ is the number of scenarios. -Assuming a collection of estimates are independent, the overall $SE^2$ of the average is the average $SE^2$ divided by $K$. -Even with the additional replicates per point, we see noticable noise in our plot. 
-Note how our three methods track each other up and down in the zero-bias scenarios, giving a sense of a shared bias in some cases.
+Even with the additional replicates per point, we see noticeable noise in our plot: look at the top-right ICC of 0.8 facet, for example.
+Also note how our three methods track each other up and down in the top row, giving a sense of a shared error.
This is because all methods are analyzing the same set of datasets; they have shared uncertainty.
This uncertainty can be deceptive.
It can also be a boon: if we are explicitly comparing the performance of one method vs another, the shared uncertainty can be subtracted out, similar to what happens in a blocked experiment [@gilbert2024multilevel].

-Here we fit a multilevel model to the data.
+One way to take advantage of this is to fit a multilevel regression model to our raw simulation results with a random effect for dataset.
+Here we fit such a model, taking advantage of the fact that bias is simply the average of the error across replicates.
+We first make a unique ID for each scenario and dataset, and then fit the model with a random effect for both.
+The first random effect allows for specific scenarios to have more or less bias beyond what our model predicts.
+The second random effect allows for a given dataset to have a larger or smaller error than expected, shared across the three estimators.
```{r}
library(lme4)

-sub_res <-
-  res %>%
-  filter( runID <= 100 ) %>%
+res_small <- res_small %>%
  mutate(
    error = ATE_hat - ATE,
    simID = paste(n_bar, J, size_coef, ICC, alpha, sep = "_"),
+    dataID = paste( simID, runID, sep="_" ),
    J = as.factor(J),
    n_bar = as.factor(n_bar),
    alpha = as.factor(alpha),
@@ -491,13 +576,13 @@ sub_res <-
  )

M <- lmer(
-  error ~ method*(J + n_bar + ICC + alpha + size_coef) + (1|runID) + (1|simID),
-  data = sub_res
+  error ~ method + (1|dataID) + (1|simID),
+  data = res_small
)

arm::display(M)
```

-We can look at the random effects:
+We can look at how much each source of variation explains the overall error:

```{r}
ranef_vars <-
  as.data.frame(VarCorr(M)) %>%
@@ -508,8 +593,42 @@ ranef_vars <-
knitr::kable(ranef_vars, digits = 2)
```

-The above model is a multilevel model that allows us to estimate how bias varies with method and simulation factor, while accounting for the uncertainty in the simulation.
-The random variation for `simID` captures unexplained variation due to the interactions of the simulation factors. We see a large value, indicating that many interactions are present, and our main effects are not fully capturing all trends.
+The random variation for `simID` captures unexplained variation due to the interactions of the simulation factors.
+It appears to be a trivial amount; almost all the variation is due to the dataset.
+This makes sense: each dataset is unbalanced due to random assignment, and that estimation error is part of the dataset random effect.
+
+So far we haven't included any simulation factors: we are pushing variation across the simulation scenarios into the random effect terms. We can instead include the simulation factors as fixed effects, to see how they impact bias.
+ +```{r} +M2 <- lmer( + error ~ method*(J + n_bar + ICC + alpha + size_coef) + (1|dataID) + (1|simID), + data = res_small +) +texreg::screenreg(M2) +``` + +The above models allow us to estimate how bias varies with method and simulation factor, while accounting for the uncertainty in the simulation. + +Finally, we can see how much variation has been explained by comparing the random effect variances: +```{r} +ranef_vars1 <- + as.data.frame(VarCorr(M)) %>% + dplyr::select(grp = grp, sd = vcov) %>% + mutate( sd = sqrt(sd), + ICC = sd^2 / sum(sd^2 ) ) +ranef_vars2 <- + as.data.frame(VarCorr(M2)) %>% + dplyr::select(grp = grp, sd = vcov) %>% + mutate( sd = sqrt(sd), + ICC = sd^2 / sum(sd^2 ) ) +rr = left_join( ranef_vars1, ranef_vars2, by = "grp", + suffix = c(".null", ".full") ) +rr <- rr %>% + mutate( sd.red = sd.full / sd.null ) +knitr::kable(rr, digits = 2) +``` + + From 03ae107d3d460521845f5b815261afba63d94dad Mon Sep 17 00:00:00 2001 From: lmiratrix Date: Thu, 6 Nov 2025 10:55:52 -0500 Subject: [PATCH 08/10] final cleanup from merge from main --- 075-special-topics-on-reporting.Rmd | 238 +++++++++--------- .../latex/__packages | 48 ---- 2 files changed, 118 insertions(+), 168 deletions(-) delete mode 100644 Designing-Simulations-in-R_cache/latex/__packages diff --git a/075-special-topics-on-reporting.Rmd b/075-special-topics-on-reporting.Rmd index 029d8f8..5d44ca2 100644 --- a/075-special-topics-on-reporting.Rmd +++ b/075-special-topics-on-reporting.Rmd @@ -8,10 +8,11 @@ editor_options: ```{r setup_exp_design_analysis, include=FALSE} library( tidyverse ) library( purrr ) +library( broom ) options(list(dplyr.summarise.inform = FALSE)) theme_set( theme_classic() ) - +source( "code/create_analysis_tree.R" ) ### Code for one of the running examples source( "case_study_code/clustered_data_simulation.R" ) @@ -47,7 +48,7 @@ We then dive more deeply into what to do when you have only a few iterations per ## Using regression to analyze simulation results -In Chapter 
\@ref(presentation-of-results) we saw some examples of using regression and ANOVA on the simulation results to summarize overall patterns across scenarios. +In Chapter \@ref(presentation-of-results) we saw some examples of using regression and ANOVA on a set of simulation results to summarize overall patterns across scenarios. In this chapter we will provide some further in-depth examples along with the R code for doing this sort of thing. ### Example 1: Biserial, revisited @@ -103,7 +104,8 @@ To do this, we use `aov()` to fit an analysis of variance model: ```{r anova_example, warning=FALSE} anova_table <- aov(bias ~ rho * p1 * fixed * n, data = r_F) -summary(anova_table) +knitr::kable( summary(anova_table)[[1]], + digits = c(0,4,4,1,5) ) ``` The advantage here is the multiple levels of our categorical factors get bundled together in our table of results, making a tidier display. @@ -132,12 +134,11 @@ The resulting variance decomposition table shows the amount of variation explain ### Example 2: Cluster RCT example, revisited When we have several methods to compare, we can use meta-regression to understand how these methods change as other simulation factors change. -We next illustrated this with our running Cluster RCT example. +We next illustrate this with our running Cluster RCT example. We first turn our simulation levels (except for ICC, which has several levels) into factors, so R does not assume that sample size, for example, should be treated as a continuous variable: ```{r} - sres_f <- sres %>% mutate( @@ -150,11 +151,12 @@ M <- lm( bias ~ (n_bar + J + size_coef + ICC + alpha) * method, data = sres_f ) # View the results -stargazer::stargazer(M, type = "text", single.row = TRUE ) +tidy( M ) %>% + knitr::kable( digits = 3 ) ``` With even a modestly complex simulation, we can quickly generate a lot of regression coefficients, making our meta-regression somewhat hard to interpret. 
-The above model does not even have interactions between the simulation factors, even though the plots we have seen strongly suggest interactions among the simulation factors exist.
+The above model does not even have interactions between the simulation factors, even though the plots we have seen strongly suggest interactions among them.
That said, picking out the significant coefficients is a quick way to obtain clues as to what is driving performance.
For instance, several features interact with the LR method for bias.
The other two methods seem less impacted.

@@ -166,97 +168,69 @@ The other two methods seem less impacted.

We can simplify a meta regression model using LASSO regression, to drop coefficients that are less relevant.
This requires some work to make our model matrix of dummy variables with all the interactions.
+If using LASSO, we recommend fitting a separate model to each method being considered;
+the set of fitted LASSO models can then be compared to see which methods react to what factors, and how.
+
+We first illustrate with LR, and then extend to all three.
+To use the LASSO we have to prepare our data first by hand---this involves converting all our factors to sets of dummy variables for the regression.
+We also generate all interaction terms up to the cubic level.
```{r} library(modelr) library(glmnet) -# Define formula -form <- bias ~ ( n_bar + J + size_coef + ICC + alpha) * method +sres_f_LR <- sres_f %>% + filter( method == "LR" ) # Create model matrix -X <- model.matrix(form, data = sres_f)[, -1] +form <- bias ~ ( n_bar + J + size_coef + ICC + alpha )^3 +X <- model.matrix(form, data = sres_f_LR)[, -1] # The [,-1] drops the intercept +dim(X) # Fit LASSO -fit <- cv.glmnet(X, sres_f$bias, alpha = 1) +fit <- cv.glmnet(X, sres_f_LR$bias, alpha = 1) -# Coefficients +# Non-zero coefficients coef(fit, s = "lambda.1se") %>% as.matrix() %>% as.data.frame() %>% rownames_to_column("term") %>% - filter(abs(lambda.1se) > 0) %>% - knitr::kable(digits = 3) -``` - -When using regression, and especially LASSO, which levels are baseline can impact the final results. -Here "Agg" is our baseline method, and so our coefficients are showing how other methods differ from the Agg method. -If we selected LR as baseline, then we might suddenly see Agg and MLM as having large coefficients. - -One trick is to give dummy variables for all the methods, and overload the `method` factor with the baseline method, so that it is always the first level. 
-```{r} -form <- bias ~ 0 + ( n_bar + J + size_coef + ICC + alpha) * method -sres_f$method <- factor(sres_f$method) -vars = c("n_bar", "J", "size_coef", "alpha", "method") -contr.identity <- function(x) { - n = nlevels(x) - m <- diag(n) - rownames(m) <- colnames(m) <- levels(x) - - m -} -contr.identity(sres_f$n_bar) -X <- model.matrix(~ 0 + ( n_bar + J + size_coef + alpha) * method, - data = sres_f, - contrasts.arg = lapply(sres_f[,vars], - \(x) contr.identity(x))) - -colnames(X) -``` - -Now do the LASSO on this colinear mess: -```{r} -fit <- cv.glmnet(X, sres_f$bias, alpha = 1) -coef(fit, s = "lambda.1se") %>% - as.matrix() %>% - as.data.frame() %>% - rownames_to_column("term") %>% - filter(abs(lambda.1se) > 0) %>% - knitr::kable(digits = 3) -``` - - -We can also extend to allow for pairwise interactions of simulation factors: -```{r} -form2 <- bias ~ ( n_bar + J + size_coef + ICC + alpha)^2 * method -``` - -Interestingly, we get basically the same result: -```{r, echo=FALSE} -X2 <- model.matrix(form2, data = sres_f)[, -1] # drop intercept -fit2 <- cv.glmnet(X2, sres_f$bias, alpha = 1) -coef(fit2, s = "lambda.1se") %>% - as.matrix() %>% - as.data.frame() %>% - rownames_to_column("term") %>% - filter(abs(lambda.1se) > 0) %>% + filter(abs(s0) > 0) %>% knitr::kable(digits = 3) ``` +Note we have 71 covariates due to the many, many interactions and the fact that our sample sizes, etc., are all factors, not continuous. -#### Fitting meta models to each method +When using regression, and especially LASSO, which levels are baseline can impact the final results. +We have our smallest sample sizes, no variation, 0 ICC, and no `size_coef` as baseline. +We might imagine that other choices of baseline could suddenly make other factors appear with large coefficients. +One trick to avoid selecting a baseline is to give dummy variables for all the factors, and fit LASSO with the colinear terms. +Due to regularization, this would still work; we do not pursue this here, however. 
-We know each method responds differently to the simulation factors, so we could fit three models, one for each method. -This will give us a picture of what influences each methods performance in turn. -We can then make a table comparing the coefficients to compare the methods to one another. +We next bundle the above to make three models, one for each method. +We first rescale ICC to be on a 5 point scale to control it's relative coefficient size to the dummy variables, and then add a new feature of "zeroICC" as well (recalling the prior plots that showed ICC being 0 was unusual). ```{r} meth = c( "LR", "MLM", "Agg" ) +sres_f$zeroICC = ifelse( sres_f$ICC == 0, 1, 0 ) +sres_f$ICCsc = sres_f$ICC * 5 # rescale ICC to be on a 5 point scale + models <- map( meth, function(m) { - M <- lm( bias ~ (n_bar + J + size_coef + ICC + alpha)^2, - data = sres_f %>% filter( method == m ) ) - tidy( M ) + + sres_f_LR <- sres_f %>% + filter( method == m ) + + form <- bias ~ ( n_bar + J + size_coef + ICCsc + alpha + zeroICC )^3 + X <- model.matrix(form, data = sres_f_LR)[, -1] + fit <- cv.glmnet(X, sres_f_LR$bias, alpha = 1) + + coef(fit, s = "lambda.min") %>% + as.matrix() %>% + as.data.frame() %>% + rownames_to_column("term") %>% + rename( estimate = s0 ) %>% + filter(abs(estimate) > 0) } ) models <- @@ -266,66 +240,80 @@ models <- m_res <- models %>% dplyr::select( model, term, estimate ) %>% - pivot_wider( names_from="model", values_from="estimate" ) + pivot_wider( names_from="model", values_from="estimate" ) %>% + mutate(order = str_count(term, ":")) %>% + arrange(order) %>% + relocate(order) +options(knitr.kable.NA = '') m_res %>% - knitr::kable( digits = 2 ) + knitr::kable( digits = 3 ) %>% + print( na.print = "" ) ``` -Of course, this is table is hard to read. Better to instead plot the coefficients or use LASSO to simplify the model specification. +Of course, this is table is hard to read. 
Better to instead plot the coefficients: ```{r} +lvl = m_res$term m_resL <- m_res %>% - pivot_longer( -term, + pivot_longer( -c( order, term ), names_to = "model", values_to = "estimate" ) %>% - mutate( term = factor(term, levels = unique(term)) ) %>% - mutate( column = ifelse( as.numeric(term) <= nlevels(term)/2, "A", "B" ) ) + mutate( term = factor(term, levels = rev(lvl) ) ) ggplot( m_resL, aes( x = term, y = estimate, fill = model, group = model ) ) + - facet_wrap( ~ column, scales="free_y" ) + + facet_wrap( ~ model ) + geom_bar( stat = "identity", position = "dodge" ) + + geom_hline(yintercept = 0 ) + coord_flip() ``` -Here we see how LR stands out, but also how MLM stands out under different simulation factor combinations (see, e.g., the interaction of ICC and alpha being 0.8). -Staring at this provides some understanding of how the methods are similar, and dissimilar. +Here we see how LR stands out, but also how MLM stands out under different simulation factor combinations (see, e.g., the interaction of zeroICC, alpha being 0.8, and size_coef being 0.2). +This aggregate plot provides some understanding of how the methods are similar, and dissimilar. For another example we turn to the standard error. -Here we regress $log(SE)$ onto the coefficients, and we rescale ICC to be on a 5 point scale to control it's relative coefficeint size to the dummy variables. -We regress $log(SE)$ and then exponentiate the coefficients to get the relative change in SE. -We can then interpret an exponentiated coefficient of, 0.64 for MLM for `n_bar80` as a 36% reduction of the standard error when we increase n_bar from the baseline of 20 to 80. +Here we regress $log(SE)$ onto the coefficients. +We then exponentiate the estimated coefficients to get the relative change in SE as a function of the factors. 
+We can interpret an exponentiated coefficient of, for example, 0.64 for MLM for `n_bar80` as a 36% reduction of the standard error when we increase n_bar from the baseline of 20 to 80. +We use ordinary least squares and include all interactions up to three way interactions. +We will then simply drop all the tiny coefficients, rather than use the full LASSO machinery, to simplify our output. +This results in a plot similar to the above: -Here we make a plot like above, but with these relative changes: ```{r, echo=FALSE} meth = c( "LR", "MLM", "Agg" ) -sres_f$ICCsc = sres_f$ICC * 5 # rescale ICC to be on a 5 point scale models <- map( meth, function(m) { - M <- lm( log(SE) ~ (n_bar + J + size_coef + ICCsc + alpha)^2, + M <- lm( log(SE) ~ (n_bar + J + size_coef + ICCsc + alpha)^3, data = sres_f %>% filter( method == m ) ) tidy( M ) %>% - mutate( estimate =exp(estimate) - 1 ) + mutate( estimate = exp(estimate) - 1 ) } ) models <- models %>% set_names(meth) %>% bind_rows( .id = "model" ) m_res <- models %>% - dplyr::select( model, term, estimate ) %>% + mutate(order = str_count(term, ":")) %>% + dplyr::select( order, term, model, estimate ) %>% pivot_wider( names_from="model", values_from="estimate" ) m_resL <- m_res %>% - pivot_longer( -term, + pivot_longer( -c( order, term ), names_to = "model", values_to = "estimate" ) %>% mutate( term = factor(term, levels = unique(term)) ) %>% - mutate( column = ifelse( as.numeric(term) <= nlevels(term)/2, "A", "B" ) ) + group_by( term ) %>% + mutate( max_est = max( abs( log( estimate+1 ) ) ) ) %>% + ungroup() %>% + filter( max_est > 0.05, + term != "(Intercept)" ) + +m_resL <- m_resL %>% + mutate( term = factor( term, rev( levels(term) ) ) ) ggplot( m_resL, aes( x = term, y = estimate, fill = model, group = model ) ) + - facet_wrap( ~ column, scales="free_y" ) + geom_bar( stat = "identity", position = "dodge" ) + geom_hline( yintercept = 0, linetype = "dashed" ) + labs( y = "Relative change in SE", @@ -334,9 +322,8 @@ ggplot( 
m_resL, aes( x = term, y = estimate,
             fill = model, group = model ) ) +
-  facet_wrap( ~ column, scales="free_y" ) +
  geom_bar( stat = "identity", position = "dodge" ) +
  geom_hline( yintercept = 0, linetype = "dashed" ) +
  labs( y = "Relative change in SE",
@@ -334,9 +322,8 @@ ggplot( m_resL,
  coord_flip()
```

-This clearly shows that the methods are basically the same in terms of uncertainty estimation.
-We also see some interesting trends, such as the impact of `n_bar` declines when ICC is higher (see the interaction terms at right of plot that offset the `n_bar` main effects).
-
+Our plot clearly shows that the three methods are basically the same in terms of uncertainty estimation, with a few differences when alpha is 0.8.
+We also see some interesting trends, such as how the impact of n_bar declines when ICC is higher (see the positive interaction terms at the right of the plot).

## Using regression trees to find important factors

With more complex experiments, where the various factors are interacting with each other in strange ways, it can be a bit tricky to decipher which factors are important and what patterns are stable.
Another exploration approach we might use is regression trees.
+We wrote a utility method, a wrapper to the `rpart` package, to do this ([script here](code/create_analysis_tree.R)).
Here, for example, we see what predicts larger bias amounts:
+
```{r}
source( here::here( "code/create_analysis_tree.R" ) )
-set.seed(4344443)
+
+set.seed(12411)
create_analysis_tree( sres_f,
  outcome = "bias",
  predictor_vars = c("method", "n_bar", "J",
@@ -355,12 +345,9 @@ create_analysis_tree( sres_f,
  tree_title = "Cluster RCT Bias Analysis Tree" )
```

-We will not walk through the tree code, but you can review it [here](code/create_analysis_tree.R).
-This function is a wrapper of the `rpart` package.
-
-The default pruning is based on a cross-fitting evaluation, and our sample size is not too terribly high (just the number of simulation scenarios fit).
-Rerunning the code with a different seed can give a different tree.
-In general, it might be worth forcibly simplifying the tree.
+The default pruning is based on a cross-fitting evaluation, but our sample size is not terribly high (just the number of simulation scenarios fit), so this is quite unstable.
+Rerunning the code with a different seed will generally give a different tree.
+We find that it is often worth forcibly simplifying the tree.
Trees are built greedily, so forcibly trimming often leaves you only with the big things.
For example:

@@ -369,10 +356,11 @@
```{r}
create_analysis_tree( sres_f,
                      outcome = "bias",
                      predictor_vars = c("method", "n_bar", "J",
                                        "size_coef", "ICC", "alpha"),
-                      tree_title = "Smaller Cluster RCT Bias Analysis Tree" )
+                      tree_title = "Smaller Cluster RCT Bias Analysis Tree",
+                      min_leaves = 5, max_leaves = 10 )
```

-This tree gives a very straightforward story: if `size_coef` is not 0, we are using LR, then alpha drives bias.
+This tree gives a very straightforward story: if `size_coef` is not 0 and we are using LR, then alpha drives bias.

We can also zero in on specific methods to understand how they engage with the simulation factors, like so:

@@ -394,21 +382,25 @@ Generally we would not use a tree like this for a final reporting of results, bu

## Analyzing results with few iterations per scenario

-When each simulation iteration is expensive to run (i.e., when each model fitting takes several minutes), then running thousands of iterations for many scenarios may not be computationally feasible.
-But running simulations with only a small number of iterations will yield very noisy estimates of estimator performance.
+When each simulation iteration is expensive to run (e.g., if fitting your model takes several minutes), then running thousands of iterations for many scenarios may not be computationally feasible.
+But running simulations with only a small number of iterations will yield very noisy estimates of estimator performance for that scenario.
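To get a feel for the scale of that noise, here is a small base-R sketch; the error SD of 0.5 and the replicate counts are made-up values for illustration, not numbers from the simulation:

```r
# How noisy is a bias estimate based on only R = 100 replicates?
# Assume an unbiased estimator whose errors have SD 0.5 across replicates.
set.seed(343)
R <- 100
bias_hats <- replicate(1000, mean(rnorm(R, mean = 0, sd = 0.5)))

# The spread of these bias estimates is the MCSE, roughly SD / sqrt(R):
sd(bias_hats)   # close to 0.5 / sqrt(100) = 0.05
```

With 100 replicates, a true bias of 0.05 would be essentially indistinguishable from Monte Carlo noise.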
+
Now, if the methods being evaluated are substantially different, then differences in performance might still be evident even with only a few iterations.
More generally, however, the Monte Carlo Standard Errors (MCSEs) may be so large that you will have a hard time discriminating between systematic patterns and noise.

One tool to handle few iterations is aggregation: if you average across scenarios, those averages will have more precise estimates of (average) performance than the estimates of performance within the scenarios.
-Do not, by contrast, trust the bundling approaches--the MCSEs will make your boxes wider, and give the impression that there is more variation across scenarios than there really is.
+Do not, by contrast, trust the bundling approach--the MCSEs will make your boxes wider, and give the impression that there is more variation across scenarios than there really is.

-Regression approaches can be particularly useful: a regression will effectively average performance across scenario, and give summaries of overall trends.
+Meta-regression approaches such as we saw above can be particularly useful: a regression will effectively average performance across scenarios, and give summaries of overall trends.
You can even fit random effects regression, specifically accounting for the noise in the scenario-specific performance measures.
-For more on this approach see @gilbert2024multilevel.
+For more on using random effects for your meta-regression see @gilbert2024multilevel.

### Example: ClusterRCT with only 100 replicates per scenario

+
```{r, include=FALSE}
+set.seed( 40440 )
+
# Make small dataset
res_small <- res %>%
  mutate( runID = as.numeric( runID ) ) %>%
@@ -428,13 +420,11 @@ ssres <- res_small %>%
   )
ssres

-# Now 100 iterations per factor
summary( ssres$R )
-
```

In the prior chapter we analyzed the results of our cluster RCT simulation with 1000 iterations per scenario.
But say we only had 100 per scenario.
Using the prior chapter as a guide, we next recreate some of the plots to show how MCSE can distort the picture of what is going on. First, we look at our single plot of the raw results. @@ -462,11 +452,11 @@ ggplot( sres_sub, aes( as.factor(alpha), bias, ``` Our uncertainty is much less when ICC is 0; this is because our estimators are far more precise due to not having cluster variation to contend with. -We also see substantial amounts of uncertainty, making it very hard to tell the different estimatord apart. +Other than the ICC = 0 case, we see substantial amounts of uncertainty, making it very hard to tell the different estimators apart. In the top row, second plot from left, we see that the three estimators are co-dependent: they all react similarly to the same datasets, so if we end up with datasets that randomly lead to large estimates, all three will give large estimates. The shape we are seeing is not a systematic bias, but rather a shared random variation. -Here is the same plot with the full 1000 replicates: +Here is the same plot with the full 1000 replicates, with the 100 replicate results overlaid in light color for comparison: ```{r, echo=FALSE} sres_sub_full <- @@ -484,16 +474,24 @@ ggplot( sres_sub_full, aes( as.factor(alpha), bias, width = 0, position = dodge ) + geom_line( position = dodge ) + + geom_point( data=sres_sub, position = dodge, alpha=0.2 ) + + # geom_errorbar( data=sres_sub, aes( ymin = bias - 2*bias.mcse, +# ymax = bias + 2*bias.mcse ), + # alpha=0.25, + # width = 0, + # position = dodge ) + + geom_line( data=sres_sub, position = dodge, alpha=0.2 ) + geom_hline( yintercept = 0 ) + theme_minimal() + coord_cartesian( ylim = c(-0.10,0.10) ) ``` The MCSEs have shrunk by around $1/\sqrt{10} = 0.32$, as we would expect (generally the MCSEs will be on the order of $1/\sqrt{R}$, where $R$ is the number of replicates, so to halve the MCSE you need to quadruple the number of replicates). 
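The $1/\sqrt{R}$ scaling above can be spot-checked directly; the SD of 0.5 is an arbitrary placeholder value:

```r
# MCSE is on the order of SD / sqrt(R), where R is the number of replicates.
sd_est <- 0.5                                  # assumed SD across replicates

(sd_est / sqrt(100)) / (sd_est / sqrt(1000))   # sqrt(10), about 3.16
(sd_est / sqrt(100)) / (sd_est / sqrt(400))    # 2: quadruple R to halve the MCSE
```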
-Also note the top-left pattern has shifted to a flat, slightly elevated line: we don't know if the elevation is real, just as we don't know if the dip in the prior plot was real. +Also note the ICC=0.2 top facet has shifted to a flat, slightly elevated line: we do not yet know if the elevation is real, just as we did not know if the dip in the prior plot was real. Our confidence intervals are still including 0: it is possible there is no bias at all when the size coefficient is 0 (in fact we are fairly sure it is indeed the case). ```{r, include=FALSE} +# Checking MCSEs are smaller as expected summary( sres_sub_full$bias.mcse / sres_sub$bias.mcse ) ``` @@ -502,7 +500,7 @@ For example, if we aggregate across 9 scenarios, our number of replicates goes f To calculate an aggregated MCSE, we aggregate our scenario-specific MCSEs as follows: $$ MCSE_{agg} = \sqrt{ \frac{1}{K^2} \sum_{k=1}^{K} MCSE_k^2 } $$ -where $MCSE_i$ is the Monte Carlo Standard Error for scenario $i$, and $k$ is the number of scenarios. +where $MCSE_k$ is the Monte Carlo Standard Error for scenario $k$, and $K$ is the number of scenarios being averaged. Assuming a collection of estimates are independent, the overall $SE^2$ of an average is the average $SE^2$ divided by $K$. 
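As a quick numerical check of this formula (made-up values: $K = 9$ scenarios, each with an MCSE of 0.03):

```r
# Aggregated MCSE across K scenarios:
#   MCSE_agg = sqrt( sum(MCSE_k^2) / K^2 ) = sqrt( mean(MCSE_k^2) ) / sqrt(K)
mcse_k   <- rep(0.03, 9)
K        <- length(mcse_k)
mcse_agg <- sqrt(mean(mcse_k^2)) / sqrt(K)
mcse_agg   # 0.01, one third of each scenario's individual MCSE
```

Averaging 9 scenarios of 100 replicates each behaves like a single scenario with 900 replicates.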
In code we have:

@@ -513,8 +511,8 @@ sres_sub2 <-
   group_by( method, alpha, size_coef, ICC ) %>%
   summarise( bias = mean( bias ),
-             bias.mcse = sqrt( mean( bias.mcse^2 )) / sqrt(n()),
-             n = n(),
+             bias.mcse = sqrt( mean( bias.mcse^2 ) ) / sqrt( n() ),
+             K = n(),
              .groups = "drop" )
```
diff --git a/Designing-Simulations-in-R_cache/latex/__packages b/Designing-Simulations-in-R_cache/latex/__packages
deleted file mode 100644
index 2d63cdf..0000000
--- a/Designing-Simulations-in-R_cache/latex/__packages
+++ /dev/null
@@ -1,48 +0,0 @@
-tidyverse
-ggplot2
-tibble
-tidyr
-readr
-purrr
-dplyr
-stringr
-forcats
-lubridate
-simhelpers
-psych
-mvtnorm
-Matrix
-lme4
-MASS
-arm
-lmerTest
-estimatr
-blkvar
-microbenchmark
-future
-furrr
-lsr
-bookdown
-knitr
-rmarkdown
-kableExtra
-ggridges
-metadat
-numDeriv
-metafor
-carData
-car
-zoo
-lmtest
-sandwich
-survival
-AER
-modelr
-glmnet
-rpart
-rpart.plot
-sn
-testthat
-mlmpower
-tictoc
-bench

From 0bda7b82485be150adfb0f8bddbf08e8f90012d7 Mon Sep 17 00:00:00 2001
From: lmiratrix
Date: Fri, 7 Nov 2025 11:54:42 -0500
Subject: [PATCH 09/10] entered edits and updated book and renv

---
 075-special-topics-on-reporting.Rmd           |   14 +-
 Designing-Simulations-in-R.toc                |  564 ++++-----
 .../clusterRCT_plot_bias_v1-1.pdf             |  Bin 7984 -> 8824 bytes
 .../clusterRCT_plot_bias_v2-1.pdf             |  Bin 8546 -> 11070 bytes
 .../figure-latex/disc_mde-1.pdf               |  Bin 10563 -> 10685 bytes
 .../figure-latex/disc_power-1.pdf             |  Bin 19481 -> 19622 bytes
 .../figure-latex/disc_precision-1.pdf         |  Bin 10111 -> 10215 bytes
 .../figure-latex/swan_example_setup-1.pdf     |  Bin 18905 -> 18983 bytes
 .../figure-latex/ttest_result_figure-1.pdf    |  Bin 5921 -> 6088 bytes
 .../figure-latex/unnamed-chunk-2-1.pdf        |  Bin 5336 -> 5500 bytes
 index.Rmd                                     |   14 +-
 packages.bib                                  |  255 +++-
 renv.lock                                     | 1071 +++++++----------
 13 files changed, 962 insertions(+), 956 deletions(-)

diff --git a/075-special-topics-on-reporting.Rmd b/075-special-topics-on-reporting.Rmd
index 5d44ca2..adfc1a6 100644
--- 
a/075-special-topics-on-reporting.Rmd +++ b/075-special-topics-on-reporting.Rmd @@ -196,7 +196,7 @@ coef(fit, s = "lambda.1se") %>% as.matrix() %>% as.data.frame() %>% rownames_to_column("term") %>% - filter(abs(s0) > 0) %>% + filter(abs(lambda.1se) > 0) %>% knitr::kable(digits = 3) ``` @@ -229,7 +229,7 @@ models <- map( meth, function(m) { as.matrix() %>% as.data.frame() %>% rownames_to_column("term") %>% - rename( estimate = s0 ) %>% + rename( estimate = lambda.min ) %>% filter(abs(estimate) > 0) } ) @@ -516,9 +516,9 @@ sres_sub2 <- .groups = "drop" ) ``` -Note that the `SE` variable is simply the standard deviation of the estimates. +Recall that the `SE` variable is simply the standard deviation of the estimates. -Here is our aggregated bias plot, aggregating across `n_bar` and `J`: +We can then make our aggregated bias plot, aggregating across `n_bar` and `J`: ```{r, echo=FALSE} ggplot( sres_sub2, aes( as.factor(alpha), bias, @@ -549,13 +549,13 @@ summary( ss$bias.mcse.x / ss$bias.mcse.y ) Even with the additional replicates per point, we see noticeable noise in our plot: look at the top-right ICC of 0.8 facet, for example. -Also note how our three methods track each other up and down in top row, giving a sense of a shared error. +Also note how our three methods continue to track each other up and down in top row, giving a sense of a shared error. This is because all methods are analyzing the same set of datasets; they have shared uncertainty. This uncertainty can be deceptive. It can also be a boon: if we are explicitly comparing the performance of one method vs another, the shared uncertainty can be subtracted out, similar to what happens in a blocked experiment [@gilbert2024multilevel]. One way to take advantage of this is to fit a multilevel regression model to our raw simulation results with a random effect for dataset. -Here we fit such a model, taking advantage of the fact that bias is simply the average of the error across replicates. 
+We next fit such a model, taking advantage of the fact that bias is simply the average of the error across replicates. We first make a unique ID for each scenario and dataset, and then fit the model with a random effect for both. The first random effect allows for specific scenarios to have more or less bias beyond what our model predicts. The second random effect allows for a given dataset to have a larger or smaller error than expected, shared across the three estimators. @@ -595,7 +595,7 @@ The random variation for `simID` captures unexplained variation due to the inter It appears to be a trivial amount; almost all the variation is due to the dataset. This makes sense: each datasets is unbalanced due to random assignment, and that estimation error is part of the dataset random effect. -So far we haven't included any simulation factors: we are pushing variation across simulation into the random effect terms. We can instead include the simulation factors as fixed effects, to see how they impact bias. +So far we have not included any simulation factors: we are pushing variation across simulation into the random effect terms. We can instead include the simulation factors as fixed effects, to see how they impact bias. 
```{r} M2 <- lmer( diff --git a/Designing-Simulations-in-R.toc b/Designing-Simulations-in-R.toc index b7aeb65..a71dcf1 100644 --- a/Designing-Simulations-in-R.toc +++ b/Designing-Simulations-in-R.toc @@ -1,282 +1,282 @@ -\contentsline {chapter}{Welcome}{9}{chapter*.2}% -\contentsline {section}{License}{10}{section*.3}% -\contentsline {section}{About the authors}{10}{section*.4}% -\contentsline {section}{Acknowledgements}{11}{section*.5}% -\contentsline {part}{I\hspace {1em}An Introductory Look}{13}{part.1}% -\contentsline {chapter}{\numberline {1}Introduction}{15}{chapter.1}% -\contentsline {section}{\numberline {1.1}Some of simulation's many uses}{16}{section.1.1}% -\contentsline {subsection}{\numberline {1.1.1}Comparing statistical approaches}{17}{subsection.1.1.1}% -\contentsline {subsection}{\numberline {1.1.2}Assessing performance of complex pipelines}{17}{subsection.1.1.2}% -\contentsline {subsection}{\numberline {1.1.3}Assessing performance under misspecification}{18}{subsection.1.1.3}% -\contentsline {subsection}{\numberline {1.1.4}Assessing the finite-sample performance of a statistical approach}{19}{subsection.1.1.4}% -\contentsline {subsection}{\numberline {1.1.5}Conducting Power Analyses}{19}{subsection.1.1.5}% -\contentsline {subsection}{\numberline {1.1.6}Simulating processess}{20}{subsection.1.1.6}% -\contentsline {section}{\numberline {1.2}The perils of simulation as evidence}{21}{section.1.2}% -\contentsline {section}{\numberline {1.3}Simulating to learn}{23}{section.1.3}% -\contentsline {section}{\numberline {1.4}Why R?}{24}{section.1.4}% -\contentsline {section}{\numberline {1.5}Organization of the text}{25}{section.1.5}% -\contentsline {chapter}{\numberline {2}Programming Preliminaries}{27}{chapter.2}% -\contentsline {section}{\numberline {2.1}Welcome to the tidyverse}{27}{section.2.1}% -\contentsline {section}{\numberline {2.2}Functions}{28}{section.2.2}% -\contentsline {subsection}{\numberline {2.2.1}Rolling your own}{29}{subsection.2.2.1}% 
-\contentsline {subsection}{\numberline {2.2.2}A dangerous function}{30}{subsection.2.2.2}% -\contentsline {subsection}{\numberline {2.2.3}Using Named Arguments}{33}{subsection.2.2.3}% -\contentsline {subsection}{\numberline {2.2.4}Argument Defaults}{34}{subsection.2.2.4}% -\contentsline {subsection}{\numberline {2.2.5}Function skeletons}{35}{subsection.2.2.5}% -\contentsline {section}{\numberline {2.3}\texttt {\textbackslash {}\textgreater {}} (Pipe) dreams}{35}{section.2.3}% -\contentsline {section}{\numberline {2.4}Recipes versus Patterns}{36}{section.2.4}% -\contentsline {section}{\numberline {2.5}Exercises}{37}{section.2.5}% -\contentsline {chapter}{\numberline {3}An initial simulation}{39}{chapter.3}% -\contentsline {section}{\numberline {3.1}Simulating a single scenario}{42}{section.3.1}% -\contentsline {section}{\numberline {3.2}A non-normal population distribution}{43}{section.3.2}% -\contentsline {section}{\numberline {3.3}Simulating across different scenarios}{45}{section.3.3}% -\contentsline {section}{\numberline {3.4}Extending the simulation design}{48}{section.3.4}% -\contentsline {section}{\numberline {3.5}Exercises}{48}{section.3.5}% -\contentsline {part}{II\hspace {1em}Structure and Mechanics of a Simulation Study}{51}{part.2}% -\contentsline {chapter}{\numberline {4}Structure of a simulation study}{53}{chapter.4}% -\contentsline {section}{\numberline {4.1}General structure of a simulation}{53}{section.4.1}% -\contentsline {section}{\numberline {4.2}Tidy, modular simulations}{55}{section.4.2}% -\contentsline {section}{\numberline {4.3}Skeleton of a simulation study}{56}{section.4.3}% -\contentsline {subsection}{\numberline {4.3.1}Data-Generating Process}{58}{subsection.4.3.1}% -\contentsline {subsection}{\numberline {4.3.2}Data Analysis Procedure}{59}{subsection.4.3.2}% -\contentsline {subsection}{\numberline {4.3.3}Repetition}{59}{subsection.4.3.3}% -\contentsline {subsection}{\numberline {4.3.4}Performance summaries}{61}{subsection.4.3.4}% 
-\contentsline {subsection}{\numberline {4.3.5}Multifactor simulations}{61}{subsection.4.3.5}% -\contentsline {section}{\numberline {4.4}Exercises}{62}{section.4.4}% -\contentsline {chapter}{\numberline {5}Case Study: Heteroskedastic ANOVA and Welch}{63}{chapter.5}% -\contentsline {section}{\numberline {5.1}The data-generating model}{66}{section.5.1}% -\contentsline {subsection}{\numberline {5.1.1}Now make a function}{68}{subsection.5.1.1}% -\contentsline {subsection}{\numberline {5.1.2}Cautious coding}{69}{subsection.5.1.2}% -\contentsline {section}{\numberline {5.2}The hypothesis testing procedures}{70}{section.5.2}% -\contentsline {section}{\numberline {5.3}Running the simulation}{71}{section.5.3}% -\contentsline {section}{\numberline {5.4}Summarizing test performance}{72}{section.5.4}% -\contentsline {section}{\numberline {5.5}Exercises}{74}{section.5.5}% -\contentsline {subsection}{\numberline {5.5.1}Other \(\alpha \)'s}{74}{subsection.5.5.1}% -\contentsline {subsection}{\numberline {5.5.2}Compare results}{74}{subsection.5.5.2}% -\contentsline {subsection}{\numberline {5.5.3}Power}{75}{subsection.5.5.3}% -\contentsline {subsection}{\numberline {5.5.4}Wide or long?}{75}{subsection.5.5.4}% -\contentsline {subsection}{\numberline {5.5.5}Other tests}{76}{subsection.5.5.5}% -\contentsline {subsection}{\numberline {5.5.6}Methodological extensions}{76}{subsection.5.5.6}% -\contentsline {subsection}{\numberline {5.5.7}Power analysis}{76}{subsection.5.5.7}% -\contentsline {chapter}{\numberline {6}Data-generating processes}{77}{chapter.6}% -\contentsline {section}{\numberline {6.1}Examples}{77}{section.6.1}% -\contentsline {subsection}{\numberline {6.1.1}Example 1: One-way analysis of variance}{78}{subsection.6.1.1}% -\contentsline {subsection}{\numberline {6.1.2}Example 2: Bivariate Poisson model}{78}{subsection.6.1.2}% -\contentsline {subsection}{\numberline {6.1.3}Example 3: Hierarchical linear model for a cluster-randomized trial}{79}{subsection.6.1.3}% 
-\contentsline {section}{\numberline {6.2}Components of a DGP}{79}{section.6.2}% -\contentsline {section}{\numberline {6.3}A statistical model is a recipe for data generation}{82}{section.6.3}% -\contentsline {section}{\numberline {6.4}Plot the artificial data}{84}{section.6.4}% -\contentsline {section}{\numberline {6.5}Check the data-generating function}{86}{section.6.5}% -\contentsline {section}{\numberline {6.6}Example: Simulating clustered data}{87}{section.6.6}% -\contentsline {subsection}{\numberline {6.6.1}A design decision: What do we want to manipulate?}{88}{subsection.6.6.1}% -\contentsline {subsection}{\numberline {6.6.2}A model for a cluster RCT}{89}{subsection.6.6.2}% -\contentsline {subsection}{\numberline {6.6.3}From equations to code}{91}{subsection.6.6.3}% -\contentsline {subsection}{\numberline {6.6.4}Standardization in the DGP}{94}{subsection.6.6.4}% -\contentsline {section}{\numberline {6.7}Sometimes a DGP is all you need}{96}{section.6.7}% -\contentsline {section}{\numberline {6.8}More to explore}{101}{section.6.8}% -\contentsline {section}{\numberline {6.9}Exercises}{102}{section.6.9}% -\contentsline {subsection}{\numberline {6.9.1}The Welch test on a shifted-and-scaled \(t\) distribution}{102}{subsection.6.9.1}% -\contentsline {subsection}{\numberline {6.9.2}Plot the bivariate Poisson}{102}{subsection.6.9.2}% -\contentsline {subsection}{\numberline {6.9.3}Check the bivariate Poisson function}{103}{subsection.6.9.3}% -\contentsline {subsection}{\numberline {6.9.4}Add error-catching to the bivariate Poisson function}{103}{subsection.6.9.4}% -\contentsline {subsection}{\numberline {6.9.5}A bivariate negative binomial distribution}{104}{subsection.6.9.5}% -\contentsline {subsection}{\numberline {6.9.6}Another bivariate negative binomial distribution}{105}{subsection.6.9.6}% -\contentsline {subsection}{\numberline {6.9.7}Plot the data from a cluster-randomized trial}{105}{subsection.6.9.7}% -\contentsline {subsection}{\numberline {6.9.8}Checking 
the Cluster RCT DGP}{105}{subsection.6.9.8}% -\contentsline {subsection}{\numberline {6.9.9}More school-level variation}{106}{subsection.6.9.9}% -\contentsline {subsection}{\numberline {6.9.10}Cluster-randomized trial with baseline predictors}{106}{subsection.6.9.10}% -\contentsline {subsection}{\numberline {6.9.11}3-parameter IRT datasets}{106}{subsection.6.9.11}% -\contentsline {subsection}{\numberline {6.9.12}Check the 3-parameter IRT DGP}{107}{subsection.6.9.12}% -\contentsline {subsection}{\numberline {6.9.13}Explore the 3-parameter IRT model}{108}{subsection.6.9.13}% -\contentsline {subsection}{\numberline {6.9.14}Random effects meta-regression}{108}{subsection.6.9.14}% -\contentsline {subsection}{\numberline {6.9.15}Meta-regression with selective reporting}{109}{subsection.6.9.15}% -\contentsline {chapter}{\numberline {7}Data analysis procedures}{111}{chapter.7}% -\contentsline {section}{\numberline {7.1}Writing estimation functions}{112}{section.7.1}% -\contentsline {section}{\numberline {7.2}Including Multiple Data Analysis Procedures}{114}{section.7.2}% -\contentsline {section}{\numberline {7.3}Validating an Estimation Function}{119}{section.7.3}% -\contentsline {subsection}{\numberline {7.3.1}Checking against existing implementations}{119}{subsection.7.3.1}% -\contentsline {subsection}{\numberline {7.3.2}Checking novel procedures}{121}{subsection.7.3.2}% -\contentsline {subsection}{\numberline {7.3.3}Checking with simulations}{124}{subsection.7.3.3}% -\contentsline {section}{\numberline {7.4}Handling errors, warnings, and other hiccups}{125}{section.7.4}% -\contentsline {subsection}{\numberline {7.4.1}Capturing errors and warnings}{126}{subsection.7.4.1}% -\contentsline {subsection}{\numberline {7.4.2}Adapting estimation procedures for errors and warnings}{133}{subsection.7.4.2}% -\contentsline {section}{\numberline {7.5}Exercises}{136}{section.7.5}% -\contentsline {subsection}{\numberline {7.5.1}More Heteroskedastic ANOVA}{136}{subsection.7.5.1}% 
-\contentsline {subsection}{\numberline {7.5.2}Contingent testing}{136}{subsection.7.5.2}% -\contentsline {subsection}{\numberline {7.5.3}Check the cluster-RCT functions}{137}{subsection.7.5.3}% -\contentsline {subsection}{\numberline {7.5.4}Extending the cluster-RCT functions}{137}{subsection.7.5.4}% -\contentsline {subsection}{\numberline {7.5.5}Contingent estimator processing}{138}{subsection.7.5.5}% -\contentsline {subsection}{\numberline {7.5.6}Estimating 3-parameter item response theory models}{138}{subsection.7.5.6}% -\contentsline {subsection}{\numberline {7.5.7}Meta-regression with selective reporting}{139}{subsection.7.5.7}% -\contentsline {chapter}{\numberline {8}Running the Simulation Process}{143}{chapter.8}% -\contentsline {section}{\numberline {8.1}Repeating oneself}{143}{section.8.1}% -\contentsline {section}{\numberline {8.2}One run at a time}{144}{section.8.2}% -\contentsline {subsection}{\numberline {8.2.1}Reparameterizing}{147}{subsection.8.2.1}% -\contentsline {section}{\numberline {8.3}Bundling simulations with \texttt {simhelpers}}{148}{section.8.3}% -\contentsline {section}{\numberline {8.4}Seeds and pseudo-random number generators}{150}{section.8.4}% -\contentsline {section}{\numberline {8.5}Exercises}{153}{section.8.5}% -\contentsline {subsection}{\numberline {8.5.1}Welch simulations}{153}{subsection.8.5.1}% -\contentsline {subsection}{\numberline {8.5.2}Compare sampling distributions of Pearson's correlation coefficients}{153}{subsection.8.5.2}% -\contentsline {subsection}{\numberline {8.5.3}Reparameterization, redux}{153}{subsection.8.5.3}% -\contentsline {subsection}{\numberline {8.5.4}Fancy clustered RCT simulations}{153}{subsection.8.5.4}% -\contentsline {chapter}{\numberline {9}Performance metrics}{155}{chapter.9}% -\contentsline {section}{\numberline {9.1}Metrics for Point Estimators}{157}{section.9.1}% -\contentsline {subsection}{\numberline {9.1.1}Comparing the Performance of the Cluster RCT Estimation 
Procedures}{159}{subsection.9.1.1}% -\contentsline {subsubsection}{Are the estimators biased?}{160}{section*.12}% -\contentsline {subsubsection}{Which method has the smallest standard error?}{161}{section*.13}% -\contentsline {subsubsection}{Which method has the smallest Root Mean Squared Error?}{161}{section*.14}% -\contentsline {subsection}{\numberline {9.1.2}Less Conventional Performance metrics}{162}{subsection.9.1.2}% -\contentsline {section}{\numberline {9.2}Metrics for Standard Error Estimators}{164}{section.9.2}% -\contentsline {subsection}{\numberline {9.2.1}Satterthwaite degrees of freedom}{166}{subsection.9.2.1}% -\contentsline {subsection}{\numberline {9.2.2}Assessing SEs for the Cluster RCT Simulation}{167}{subsection.9.2.2}% -\contentsline {section}{\numberline {9.3}Metrics for Confidence Intervals}{168}{section.9.3}% -\contentsline {subsection}{\numberline {9.3.1}Confidence Intervals in the Cluster RCT Simulation}{169}{subsection.9.3.1}% -\contentsline {section}{\numberline {9.4}Metrics for Inferential Procedures (Hypothesis Tests)}{170}{section.9.4}% -\contentsline {subsection}{\numberline {9.4.1}Validity}{171}{subsection.9.4.1}% -\contentsline {subsection}{\numberline {9.4.2}Power}{172}{subsection.9.4.2}% -\contentsline {subsection}{\numberline {9.4.3}The Rejection Rate}{172}{subsection.9.4.3}% -\contentsline {subsection}{\numberline {9.4.4}Inference in the Cluster RCT Simulation}{173}{subsection.9.4.4}% -\contentsline {section}{\numberline {9.5}Selecting Relative vs.~Absolute Metrics}{175}{section.9.5}% -\contentsline {section}{\numberline {9.6}Estimands Not Represented By a Parameter}{177}{section.9.6}% -\contentsline {section}{\numberline {9.7}Uncertainty in Performance Estimates (the Monte Carlo Standard Error)}{179}{section.9.7}% -\contentsline {subsection}{\numberline {9.7.1}MCSE for Relative Variance Estimators}{181}{subsection.9.7.1}% -\contentsline {subsection}{\numberline {9.7.2}Calculating MCSEs With the \texttt {simhelpers} 
Package}{182}{subsection.9.7.2}% -\contentsline {subsection}{\numberline {9.7.3}MCSE Calculation in our Cluster RCT Example}{183}{subsection.9.7.3}% -\contentsline {section}{\numberline {9.8}Summary of Peformance Measures}{184}{section.9.8}% -\contentsline {section}{\numberline {9.9}Concluding thoughts}{185}{section.9.9}% -\contentsline {section}{\numberline {9.10}Exercises}{185}{section.9.10}% -\contentsline {subsection}{\numberline {9.10.1}Brown and Forsythe (1974)}{185}{subsection.9.10.1}% -\contentsline {subsection}{\numberline {9.10.2}Better confidence intervals}{185}{subsection.9.10.2}% -\contentsline {subsection}{\numberline {9.10.3}Cluster RCT simulation under a strong null hypothesis}{186}{subsection.9.10.3}% -\contentsline {subsection}{\numberline {9.10.4}Jackknife calculation of MCSEs}{186}{subsection.9.10.4}% -\contentsline {subsection}{\numberline {9.10.5}Distribution theory for person-level average treatment effects}{186}{subsection.9.10.5}% -\contentsline {subsection}{\numberline {9.10.6}Multiple scenarios}{186}{subsection.9.10.6}% -\contentsline {part}{III\hspace {1em}Multifactor Simulations}{189}{part.3}% -\contentsline {chapter}{\numberline {10}Designing and executing multifactor simulations}{191}{chapter.10}% -\contentsline {section}{\numberline {10.1}Choosing parameter combinations}{193}{section.10.1}% -\contentsline {section}{\numberline {10.2}Using pmap to run multifactor simulations}{195}{section.10.2}% -\contentsline {section}{\numberline {10.3}When to calculate performance metrics}{200}{section.10.3}% -\contentsline {subsection}{\numberline {10.3.1}Aggregate as you simulate (inside)}{200}{subsection.10.3.1}% -\contentsline {subsection}{\numberline {10.3.2}Keep all simulation runs (outside)}{200}{subsection.10.3.2}% -\contentsline {subsection}{\numberline {10.3.3}Getting raw results ready for analysis}{202}{subsection.10.3.3}% -\contentsline {section}{\numberline {10.4}Summary}{204}{section.10.4}% -\contentsline {section}{\numberline 
{10.5}Case Study: A multifactor evaluation of cluster RCT estimators}{205}{section.10.5}% -\contentsline {subsection}{\numberline {10.5.1}Choosing parameters for the Clustered RCT}{205}{subsection.10.5.1}% -\contentsline {subsection}{\numberline {10.5.2}Redundant factor combinations}{207}{subsection.10.5.2}% -\contentsline {subsection}{\numberline {10.5.3}Running the simulations}{207}{subsection.10.5.3}% -\contentsline {subsection}{\numberline {10.5.4}Calculating performance metrics}{208}{subsection.10.5.4}% -\contentsline {section}{\numberline {10.6}Exercises}{210}{section.10.6}% -\contentsline {subsection}{\numberline {10.6.1}Brown and Forsythe redux}{210}{subsection.10.6.1}% -\contentsline {subsection}{\numberline {10.6.2}Meta-regression}{210}{subsection.10.6.2}% -\contentsline {subsection}{\numberline {10.6.3}Comparing the trimmed mean, median and mean}{210}{subsection.10.6.3}% -\contentsline {chapter}{\numberline {11}Exploring and presenting simulation results}{213}{chapter.11}% -\contentsline {section}{\numberline {11.1}Tabulation}{214}{section.11.1}% -\contentsline {subsection}{\numberline {11.1.1}Example: estimators of treatment variation}{216}{subsection.11.1.1}% -\contentsline {section}{\numberline {11.2}Visualization}{217}{section.11.2}% -\contentsline {subsection}{\numberline {11.2.1}Example 0: RMSE in Cluster RCTs}{218}{subsection.11.2.1}% -\contentsline {subsection}{\numberline {11.2.2}Example 1: Biserial correlation estimation}{219}{subsection.11.2.2}% -\contentsline {subsection}{\numberline {11.2.3}Example 2: Variance estimation and Meta-regression}{220}{subsection.11.2.3}% -\contentsline {subsection}{\numberline {11.2.4}Example 3: Heat maps of coverage}{220}{subsection.11.2.4}% -\contentsline {subsection}{\numberline {11.2.5}Example 4: Relative performance of treatment effect estimators}{222}{subsection.11.2.5}% -\contentsline {section}{\numberline {11.3}Modeling}{223}{section.11.3}% -\contentsline {subsection}{\numberline {11.3.1}Example 1: 
Biserial, revisited}{224}{subsection.11.3.1}% -\contentsline {subsection}{\numberline {11.3.2}Example 2: Comparing methods for cross-classified data}{225}{subsection.11.3.2}% -\contentsline {section}{\numberline {11.4}Reporting}{227}{section.11.4}% -\contentsline {chapter}{\numberline {12}Building good visualizations}{229}{chapter.12}% -\contentsline {section}{\numberline {12.1}Subsetting and Many Small Multiples}{230}{section.12.1}% -\contentsline {section}{\numberline {12.2}Bundling}{233}{section.12.2}% -\contentsline {section}{\numberline {12.3}Aggregation}{236}{section.12.3}% -\contentsline {subsubsection}{\numberline {12.3.0.1}A note on how to aggregate}{238}{subsubsection.12.3.0.1}% -\contentsline {section}{\numberline {12.4}Assessing true SEs}{239}{section.12.4}% -\contentsline {subsubsection}{\numberline {12.4.0.1}Standardizing to compare across simulation scenarios}{241}{subsubsection.12.4.0.1}% -\contentsline {section}{\numberline {12.5}The Bias-SE-RMSE plot}{245}{section.12.5}% -\contentsline {section}{\numberline {12.6}Assessing estimated SEs}{246}{section.12.6}% -\contentsline {section}{\numberline {12.7}Assessing confidence intervals}{249}{section.12.7}% -\contentsline {section}{\numberline {12.8}Exercises}{252}{section.12.8}% -\contentsline {subsection}{\numberline {12.8.1}Assessing uncertainty}{252}{subsection.12.8.1}% -\contentsline {subsection}{\numberline {12.8.2}Assessing power}{252}{subsection.12.8.2}% -\contentsline {subsection}{\numberline {12.8.3}Going deeper with coverage}{252}{subsection.12.8.3}% -\contentsline {subsection}{\numberline {12.8.4}Pearson correlations with a bivariate Poisson distribution}{252}{subsection.12.8.4}% -\contentsline {chapter}{\numberline {13}Special Topics on Reporting Simulation Results}{253}{chapter.13}% -\contentsline {section}{\numberline {13.1}Using regression to analyze simulation results}{253}{section.13.1}% -\contentsline {subsection}{\numberline {13.1.1}Example 1: Biserial, 
revisited}{253}{subsection.13.1.1}% -\contentsline {subsection}{\numberline {13.1.2}Example 2: Cluster RCT example, revisited}{256}{subsection.13.1.2}% -\contentsline {subsubsection}{\numberline {13.1.2.1}Using LASSO to simplify the model}{258}{subsubsection.13.1.2.1}% -\contentsline {subsubsection}{\numberline {13.1.2.2}Fitting models to each method}{261}{subsubsection.13.1.2.2}% -\contentsline {section}{\numberline {13.2}Using regression trees to find important factors}{265}{section.13.2}% -\contentsline {section}{\numberline {13.3}Analyzing results with few iterations per scenario}{267}{section.13.3}% -\contentsline {subsection}{\numberline {13.3.1}Example: ClusterRCT with only 100 replicates per scenario}{268}{subsection.13.3.1}% -\contentsline {section}{\numberline {13.4}What to do with warnings in simulations}{272}{section.13.4}% -\contentsline {chapter}{\numberline {14}Case study: Comparing different estimators}{277}{chapter.14}% -\contentsline {section}{\numberline {14.1}Bias-variance tradeoffs}{280}{section.14.1}% -\contentsline {chapter}{\numberline {15}Simulations as evidence}{285}{chapter.15}% -\contentsline {section}{\numberline {15.1}Strategies for making relevant simulations}{286}{section.15.1}% -\contentsline {subsection}{\numberline {15.1.1}Break symmetries and regularities}{286}{subsection.15.1.1}% -\contentsline {subsection}{\numberline {15.1.2}Make your simulation general with an extensive multi-factor experiment}{287}{subsection.15.1.2}% -\contentsline {subsection}{\numberline {15.1.3}Use previously published simulations to beat them at their own game}{287}{subsection.15.1.3}% -\contentsline {subsection}{\numberline {15.1.4}Calibrate simulation factors to real data}{287}{subsection.15.1.4}% -\contentsline {subsection}{\numberline {15.1.5}Use real data to obtain directly}{288}{subsection.15.1.5}% -\contentsline {subsection}{\numberline {15.1.6}Fully calibrated simulations}{288}{subsection.15.1.6}% -\contentsline {part}{IV\hspace 
{1em}Computational Considerations}{291}{part.4}% -\contentsline {chapter}{\numberline {16}Organizing a simulation project}{293}{chapter.16}% -\contentsline {section}{\numberline {16.1}Well structured R scripts}{294}{section.16.1}% -\contentsline {subsection}{\numberline {16.1.1}The source command}{294}{subsection.16.1.1}% -\contentsline {subsection}{\numberline {16.1.2}Putting headers in your .R file}{295}{subsection.16.1.2}% -\contentsline {subsection}{\numberline {16.1.3}Storing testing code in your scripts}{296}{subsection.16.1.3}% -\contentsline {section}{\numberline {16.2}Principled directory structures}{296}{section.16.2}% -\contentsline {section}{\numberline {16.3}Saving simulation results}{297}{section.16.3}% -\contentsline {subsection}{\numberline {16.3.1}Saving simulations in general}{297}{subsection.16.3.1}% -\contentsline {subsection}{\numberline {16.3.2}Saving simulations as you go}{298}{subsection.16.3.2}% -\contentsline {subsection}{\numberline {16.3.3}Dynamically making directories}{301}{subsection.16.3.3}% -\contentsline {subsection}{\numberline {16.3.4}Loading and combining files of simulation results}{302}{subsection.16.3.4}% -\contentsline {chapter}{\numberline {17}Parallel Processing}{303}{chapter.17}% -\contentsline {section}{\numberline {17.1}Parallel on your computer}{304}{section.17.1}% -\contentsline {section}{\numberline {17.2}Parallel on a virtual machine}{305}{section.17.2}% -\contentsline {section}{\numberline {17.3}Parallel on a cluster}{306}{section.17.3}% -\contentsline {subsection}{\numberline {17.3.1}What is a command-line interface?}{306}{subsection.17.3.1}% -\contentsline {subsection}{\numberline {17.3.2}Running a job on a cluster}{308}{subsection.17.3.2}% -\contentsline {subsection}{\numberline {17.3.3}Checking on a job}{310}{subsection.17.3.3}% -\contentsline {subsection}{\numberline {17.3.4}Running lots of jobs on a cluster}{311}{subsection.17.3.4}% -\contentsline {subsection}{\numberline {17.3.5}Resources for Harvard's 
Odyssey}{313}{subsection.17.3.5}% -\contentsline {subsection}{\numberline {17.3.6}Acknowledgements}{314}{subsection.17.3.6}% -\contentsline {chapter}{\numberline {18}Debugging and Testing}{315}{chapter.18}% -\contentsline {section}{\numberline {18.1}Debugging with \texttt {print()}}{315}{section.18.1}% -\contentsline {section}{\numberline {18.2}Debugging with \texttt {browser()}}{316}{section.18.2}% -\contentsline {section}{\numberline {18.3}Debugging with \texttt {debug()}}{317}{section.18.3}% -\contentsline {section}{\numberline {18.4}Protecting functions with \texttt {stop()}}{317}{section.18.4}% -\contentsline {section}{\numberline {18.5}Testing code}{319}{section.18.5}% -\contentsline {part}{V\hspace {1em}Complex Data Structures}{323}{part.5}% -\contentsline {chapter}{\numberline {19}Using simulation as a power calculator}{325}{chapter.19}% -\contentsline {section}{\numberline {19.1}Getting design parameters from pilot data}{326}{section.19.1}% -\contentsline {section}{\numberline {19.2}The data generating process}{327}{section.19.2}% -\contentsline {section}{\numberline {19.3}Running the simulation}{331}{section.19.3}% -\contentsline {section}{\numberline {19.4}Evaluating power}{332}{section.19.4}% -\contentsline {subsection}{\numberline {19.4.1}Checking validity of our models}{332}{subsection.19.4.1}% -\contentsline {subsection}{\numberline {19.4.2}Assessing Precision (SE)}{335}{subsection.19.4.2}% -\contentsline {subsection}{\numberline {19.4.3}Assessing power}{335}{subsection.19.4.3}% -\contentsline {subsection}{\numberline {19.4.4}Assessing Minimum Detectable Effects}{336}{subsection.19.4.4}% -\contentsline {section}{\numberline {19.5}Power for Multilevel Data}{337}{section.19.5}% -\contentsline {chapter}{\numberline {20}Simulation under the Potential Outcomes Framework}{341}{chapter.20}% -\contentsline {section}{\numberline {20.1}Finite vs.~Superpopulation inference}{342}{section.20.1}% -\contentsline {section}{\numberline {20.2}Data generation processes 
for potential outcomes}{342}{section.20.2}% -\contentsline {section}{\numberline {20.3}Finite sample performance measures}{345}{section.20.3}% -\contentsline {section}{\numberline {20.4}Nested finite simulation procedure}{348}{section.20.4}% -\contentsline {chapter}{\numberline {21}The Parametric bootstrap}{353}{chapter.21}% -\contentsline {section}{\numberline {21.1}Air conditioners: a stolen case study}{354}{section.21.1}% -\contentsline {chapter}{\numberline {A}Coding Reference}{357}{appendix.A}% -\contentsline {section}{\numberline {A.1}How to repeat yourself}{357}{section.A.1}% -\contentsline {subsection}{\numberline {A.1.1}Using \texttt {replicate()}}{357}{subsection.A.1.1}% -\contentsline {subsection}{\numberline {A.1.2}Using \texttt {map()}}{359}{subsection.A.1.2}% -\contentsline {subsection}{\numberline {A.1.3}map with no inputs}{360}{subsection.A.1.3}% -\contentsline {subsection}{\numberline {A.1.4}Other approaches for repetition}{361}{subsection.A.1.4}% -\contentsline {section}{\numberline {A.2}Default arguments for functions}{361}{section.A.2}% -\contentsline {section}{\numberline {A.3}Profiling Code}{363}{section.A.3}% -\contentsline {subsection}{\numberline {A.3.1}Using \texttt {Sys.time()} and \texttt {system.time()}}{363}{subsection.A.3.1}% -\contentsline {subsection}{\numberline {A.3.2}The \texttt {tictoc} package}{364}{subsection.A.3.2}% -\contentsline {subsection}{\numberline {A.3.3}The \texttt {bench} package}{364}{subsection.A.3.3}% -\contentsline {subsection}{\numberline {A.3.4}Profiling with \texttt {profvis}}{367}{subsection.A.3.4}% -\contentsline {section}{\numberline {A.4}Optimizing code (and why you often shouldn't)}{367}{section.A.4}% -\contentsline {subsection}{\numberline {A.4.1}Hand-building functions}{368}{subsection.A.4.1}% -\contentsline {subsection}{\numberline {A.4.2}Computational efficiency versus simplicity}{369}{subsection.A.4.2}% -\contentsline {subsection}{\numberline {A.4.3}Reusing code to speed up 
computation}{370}{subsection.A.4.3}% -\contentsline {chapter}{\numberline {B}Further readings and resources}{377}{appendix.B}% +\contentsline {chapter}{Welcome}{7}{chapter*.2}% +\contentsline {section}{License}{8}{section*.3}% +\contentsline {section}{About the authors}{8}{section*.4}% +\contentsline {section}{Acknowledgements}{9}{section*.5}% +\contentsline {part}{I\hspace {1em}An Introductory Look}{11}{part.1}% +\contentsline {chapter}{\numberline {1}Introduction}{13}{chapter.1}% +\contentsline {section}{\numberline {1.1}Some of simulation's many uses}{14}{section.1.1}% +\contentsline {subsection}{\numberline {1.1.1}Comparing statistical approaches}{15}{subsection.1.1.1}% +\contentsline {subsection}{\numberline {1.1.2}Assessing performance of complex pipelines}{15}{subsection.1.1.2}% +\contentsline {subsection}{\numberline {1.1.3}Assessing performance under misspecification}{16}{subsection.1.1.3}% +\contentsline {subsection}{\numberline {1.1.4}Assessing the finite-sample performance of a statistical approach}{16}{subsection.1.1.4}% +\contentsline {subsection}{\numberline {1.1.5}Conducting Power Analyses}{17}{subsection.1.1.5}% +\contentsline {subsection}{\numberline {1.1.6}Simulating processes}{18}{subsection.1.1.6}% +\contentsline {section}{\numberline {1.2}The perils of simulation as evidence}{19}{section.1.2}% +\contentsline {section}{\numberline {1.3}Simulating to learn}{21}{section.1.3}% +\contentsline {section}{\numberline {1.4}Why R?}{22}{section.1.4}% +\contentsline {section}{\numberline {1.5}Organization of the text}{23}{section.1.5}% +\contentsline {chapter}{\numberline {2}Programming Preliminaries}{25}{chapter.2}% +\contentsline {section}{\numberline {2.1}Welcome to the tidyverse}{25}{section.2.1}% +\contentsline {section}{\numberline {2.2}Functions}{26}{section.2.2}% +\contentsline {subsection}{\numberline {2.2.1}Rolling your own}{26}{subsection.2.2.1}% +\contentsline {subsection}{\numberline {2.2.2}A dangerous function}{27}{subsection.2.2.2}% 
+\contentsline {subsection}{\numberline {2.2.3}Using Named Arguments}{30}{subsection.2.2.3}% +\contentsline {subsection}{\numberline {2.2.4}Argument Defaults}{31}{subsection.2.2.4}% +\contentsline {subsection}{\numberline {2.2.5}Function skeletons}{32}{subsection.2.2.5}% +\contentsline {section}{\numberline {2.3}\texttt {\textbackslash {}\textgreater {}} (Pipe) dreams}{32}{section.2.3}% +\contentsline {section}{\numberline {2.4}Recipes versus Patterns}{33}{section.2.4}% +\contentsline {section}{\numberline {2.5}Exercises}{34}{section.2.5}% +\contentsline {chapter}{\numberline {3}An initial simulation}{37}{chapter.3}% +\contentsline {section}{\numberline {3.1}Simulating a single scenario}{39}{section.3.1}% +\contentsline {section}{\numberline {3.2}A non-normal population distribution}{41}{section.3.2}% +\contentsline {section}{\numberline {3.3}Simulating across different scenarios}{42}{section.3.3}% +\contentsline {section}{\numberline {3.4}Extending the simulation design}{45}{section.3.4}% +\contentsline {section}{\numberline {3.5}Exercises}{46}{section.3.5}% +\contentsline {part}{II\hspace {1em}Structure and Mechanics of a Simulation Study}{49}{part.2}% +\contentsline {chapter}{\numberline {4}Structure of a simulation study}{51}{chapter.4}% +\contentsline {section}{\numberline {4.1}General structure of a simulation}{51}{section.4.1}% +\contentsline {section}{\numberline {4.2}Tidy, modular simulations}{53}{section.4.2}% +\contentsline {section}{\numberline {4.3}Skeleton of a simulation study}{54}{section.4.3}% +\contentsline {subsection}{\numberline {4.3.1}Data-Generating Process}{56}{subsection.4.3.1}% +\contentsline {subsection}{\numberline {4.3.2}Data Analysis Procedure}{56}{subsection.4.3.2}% +\contentsline {subsection}{\numberline {4.3.3}Repetition}{57}{subsection.4.3.3}% +\contentsline {subsection}{\numberline {4.3.4}Performance summaries}{58}{subsection.4.3.4}% +\contentsline {subsection}{\numberline {4.3.5}Multifactor simulations}{59}{subsection.4.3.5}% 
+\contentsline {section}{\numberline {4.4}Exercises}{60}{section.4.4}% +\contentsline {chapter}{\numberline {5}Case Study: Heteroskedastic ANOVA and Welch}{61}{chapter.5}% +\contentsline {section}{\numberline {5.1}The data-generating model}{64}{section.5.1}% +\contentsline {subsection}{\numberline {5.1.1}Now make a function}{66}{subsection.5.1.1}% +\contentsline {subsection}{\numberline {5.1.2}Cautious coding}{67}{subsection.5.1.2}% +\contentsline {section}{\numberline {5.2}The hypothesis testing procedures}{68}{section.5.2}% +\contentsline {section}{\numberline {5.3}Running the simulation}{69}{section.5.3}% +\contentsline {section}{\numberline {5.4}Summarizing test performance}{70}{section.5.4}% +\contentsline {section}{\numberline {5.5}Exercises}{72}{section.5.5}% +\contentsline {subsection}{\numberline {5.5.1}Other \(\alpha \)'s}{72}{subsection.5.5.1}% +\contentsline {subsection}{\numberline {5.5.2}Compare results}{72}{subsection.5.5.2}% +\contentsline {subsection}{\numberline {5.5.3}Power}{72}{subsection.5.5.3}% +\contentsline {subsection}{\numberline {5.5.4}Wide or long?}{72}{subsection.5.5.4}% +\contentsline {subsection}{\numberline {5.5.5}Other tests}{73}{subsection.5.5.5}% +\contentsline {subsection}{\numberline {5.5.6}Methodological extensions}{73}{subsection.5.5.6}% +\contentsline {subsection}{\numberline {5.5.7}Power analysis}{73}{subsection.5.5.7}% +\contentsline {chapter}{\numberline {6}Data-generating processes}{75}{chapter.6}% +\contentsline {section}{\numberline {6.1}Examples}{75}{section.6.1}% +\contentsline {subsection}{\numberline {6.1.1}Example 1: One-way analysis of variance}{76}{subsection.6.1.1}% +\contentsline {subsection}{\numberline {6.1.2}Example 2: Bivariate Poisson model}{76}{subsection.6.1.2}% +\contentsline {subsection}{\numberline {6.1.3}Example 3: Hierarchical linear model for a cluster-randomized trial}{76}{subsection.6.1.3}% +\contentsline {section}{\numberline {6.2}Components of a DGP}{77}{section.6.2}% +\contentsline 
{section}{\numberline {6.3}A statistical model is a recipe for data generation}{80}{section.6.3}% +\contentsline {section}{\numberline {6.4}Plot the artificial data}{82}{section.6.4}% +\contentsline {section}{\numberline {6.5}Check the data-generating function}{83}{section.6.5}% +\contentsline {section}{\numberline {6.6}Example: Simulating clustered data}{85}{section.6.6}% +\contentsline {subsection}{\numberline {6.6.1}A design decision: What do we want to manipulate?}{85}{subsection.6.6.1}% +\contentsline {subsection}{\numberline {6.6.2}A model for a cluster RCT}{86}{subsection.6.6.2}% +\contentsline {subsection}{\numberline {6.6.3}From equations to code}{89}{subsection.6.6.3}% +\contentsline {subsection}{\numberline {6.6.4}Standardization in the DGP}{91}{subsection.6.6.4}% +\contentsline {section}{\numberline {6.7}Sometimes a DGP is all you need}{93}{section.6.7}% +\contentsline {section}{\numberline {6.8}More to explore}{98}{section.6.8}% +\contentsline {section}{\numberline {6.9}Exercises}{98}{section.6.9}% +\contentsline {subsection}{\numberline {6.9.1}The Welch test on a shifted-and-scaled \(t\) distribution}{98}{subsection.6.9.1}% +\contentsline {subsection}{\numberline {6.9.2}Plot the bivariate Poisson}{99}{subsection.6.9.2}% +\contentsline {subsection}{\numberline {6.9.3}Check the bivariate Poisson function}{99}{subsection.6.9.3}% +\contentsline {subsection}{\numberline {6.9.4}Add error-catching to the bivariate Poisson function}{100}{subsection.6.9.4}% +\contentsline {subsection}{\numberline {6.9.5}A bivariate negative binomial distribution}{100}{subsection.6.9.5}% +\contentsline {subsection}{\numberline {6.9.6}Another bivariate negative binomial distribution}{101}{subsection.6.9.6}% +\contentsline {subsection}{\numberline {6.9.7}Plot the data from a cluster-randomized trial}{102}{subsection.6.9.7}% +\contentsline {subsection}{\numberline {6.9.8}Checking the Cluster RCT DGP}{102}{subsection.6.9.8}% +\contentsline {subsection}{\numberline {6.9.9}More 
school-level variation}{102}{subsection.6.9.9}% +\contentsline {subsection}{\numberline {6.9.10}Cluster-randomized trial with baseline predictors}{102}{subsection.6.9.10}% +\contentsline {subsection}{\numberline {6.9.11}3-parameter IRT datasets}{103}{subsection.6.9.11}% +\contentsline {subsection}{\numberline {6.9.12}Check the 3-parameter IRT DGP}{104}{subsection.6.9.12}% +\contentsline {subsection}{\numberline {6.9.13}Explore the 3-parameter IRT model}{104}{subsection.6.9.13}% +\contentsline {subsection}{\numberline {6.9.14}Random effects meta-regression}{104}{subsection.6.9.14}% +\contentsline {subsection}{\numberline {6.9.15}Meta-regression with selective reporting}{105}{subsection.6.9.15}% +\contentsline {chapter}{\numberline {7}Data analysis procedures}{107}{chapter.7}% +\contentsline {section}{\numberline {7.1}Writing estimation functions}{108}{section.7.1}% +\contentsline {section}{\numberline {7.2}Including Multiple Data Analysis Procedures}{110}{section.7.2}% +\contentsline {section}{\numberline {7.3}Validating an Estimation Function}{114}{section.7.3}% +\contentsline {subsection}{\numberline {7.3.1}Checking against existing implementations}{115}{subsection.7.3.1}% +\contentsline {subsection}{\numberline {7.3.2}Checking novel procedures}{116}{subsection.7.3.2}% +\contentsline {subsection}{\numberline {7.3.3}Checking with simulations}{119}{subsection.7.3.3}% +\contentsline {section}{\numberline {7.4}Handling errors, warnings, and other hiccups}{121}{section.7.4}% +\contentsline {subsection}{\numberline {7.4.1}Capturing errors and warnings}{121}{subsection.7.4.1}% +\contentsline {subsection}{\numberline {7.4.2}Adapting estimation procedures for errors and warnings}{127}{subsection.7.4.2}% +\contentsline {section}{\numberline {7.5}Exercises}{130}{section.7.5}% +\contentsline {subsection}{\numberline {7.5.1}More Heteroskedastic ANOVA}{130}{subsection.7.5.1}% +\contentsline {subsection}{\numberline {7.5.2}Contingent testing}{131}{subsection.7.5.2}% 
+\contentsline {subsection}{\numberline {7.5.3}Check the cluster-RCT functions}{131}{subsection.7.5.3}% +\contentsline {subsection}{\numberline {7.5.4}Extending the cluster-RCT functions}{131}{subsection.7.5.4}% +\contentsline {subsection}{\numberline {7.5.5}Contingent estimator processing}{132}{subsection.7.5.5}% +\contentsline {subsection}{\numberline {7.5.6}Estimating 3-parameter item response theory models}{132}{subsection.7.5.6}% +\contentsline {subsection}{\numberline {7.5.7}Meta-regression with selective reporting}{133}{subsection.7.5.7}% +\contentsline {chapter}{\numberline {8}Running the Simulation Process}{137}{chapter.8}% +\contentsline {section}{\numberline {8.1}Repeating oneself}{137}{section.8.1}% +\contentsline {section}{\numberline {8.2}One run at a time}{138}{section.8.2}% +\contentsline {subsection}{\numberline {8.2.1}Reparameterizing}{141}{subsection.8.2.1}% +\contentsline {section}{\numberline {8.3}Bundling simulations with \texttt {simhelpers}}{142}{section.8.3}% +\contentsline {section}{\numberline {8.4}Seeds and pseudo-random number generators}{143}{section.8.4}% +\contentsline {section}{\numberline {8.5}Exercises}{146}{section.8.5}% +\contentsline {subsection}{\numberline {8.5.1}Welch simulations}{146}{subsection.8.5.1}% +\contentsline {subsection}{\numberline {8.5.2}Compare sampling distributions of Pearson's correlation coefficients}{146}{subsection.8.5.2}% +\contentsline {subsection}{\numberline {8.5.3}Reparameterization, redux}{147}{subsection.8.5.3}% +\contentsline {subsection}{\numberline {8.5.4}Fancy clustered RCT simulations}{147}{subsection.8.5.4}% +\contentsline {chapter}{\numberline {9}Performance metrics}{149}{chapter.9}% +\contentsline {section}{\numberline {9.1}Metrics for Point Estimators}{151}{section.9.1}% +\contentsline {subsection}{\numberline {9.1.1}Comparing the Performance of the Cluster RCT Estimation Procedures}{153}{subsection.9.1.1}% +\contentsline {subsubsection}{Are the estimators biased?}{154}{section*.12}% 
+\contentsline {subsubsection}{Which method has the smallest standard error?}{154}{section*.13}% +\contentsline {subsubsection}{Which method has the smallest Root Mean Squared Error?}{155}{section*.14}% +\contentsline {subsection}{\numberline {9.1.2}Less Conventional Performance metrics}{156}{subsection.9.1.2}% +\contentsline {section}{\numberline {9.2}Metrics for Standard Error Estimators}{158}{section.9.2}% +\contentsline {subsection}{\numberline {9.2.1}Satterthwaite degrees of freedom}{160}{subsection.9.2.1}% +\contentsline {subsection}{\numberline {9.2.2}Assessing SEs for the Cluster RCT Simulation}{161}{subsection.9.2.2}% +\contentsline {section}{\numberline {9.3}Metrics for Confidence Intervals}{162}{section.9.3}% +\contentsline {subsection}{\numberline {9.3.1}Confidence Intervals in the Cluster RCT Simulation}{163}{subsection.9.3.1}% +\contentsline {section}{\numberline {9.4}Metrics for Inferential Procedures (Hypothesis Tests)}{164}{section.9.4}% +\contentsline {subsection}{\numberline {9.4.1}Validity}{165}{subsection.9.4.1}% +\contentsline {subsection}{\numberline {9.4.2}Power}{165}{subsection.9.4.2}% +\contentsline {subsection}{\numberline {9.4.3}The Rejection Rate}{166}{subsection.9.4.3}% +\contentsline {subsection}{\numberline {9.4.4}Inference in the Cluster RCT Simulation}{167}{subsection.9.4.4}% +\contentsline {section}{\numberline {9.5}Selecting Relative vs.\nobreakspace {}Absolute Metrics}{169}{section.9.5}% +\contentsline {section}{\numberline {9.6}Estimands Not Represented By a Parameter}{170}{section.9.6}% +\contentsline {section}{\numberline {9.7}Uncertainty in Performance Estimates (the Monte Carlo Standard Error)}{173}{section.9.7}% +\contentsline {subsection}{\numberline {9.7.1}MCSE for Relative Variance Estimators}{174}{subsection.9.7.1}% +\contentsline {subsection}{\numberline {9.7.2}Calculating MCSEs With the \texttt {simhelpers} Package}{175}{subsection.9.7.2}% +\contentsline {subsection}{\numberline {9.7.3}MCSE Calculation in our Cluster 
RCT Example}{176}{subsection.9.7.3}% +\contentsline {section}{\numberline {9.8}Summary of Performance Measures}{177}{section.9.8}% +\contentsline {section}{\numberline {9.9}Concluding thoughts}{178}{section.9.9}% +\contentsline {section}{\numberline {9.10}Exercises}{178}{section.9.10}% +\contentsline {subsection}{\numberline {9.10.1}Brown and Forsythe (1974)}{178}{subsection.9.10.1}% +\contentsline {subsection}{\numberline {9.10.2}Better confidence intervals}{178}{subsection.9.10.2}% +\contentsline {subsection}{\numberline {9.10.3}Cluster RCT simulation under a strong null hypothesis}{179}{subsection.9.10.3}% +\contentsline {subsection}{\numberline {9.10.4}Jackknife calculation of MCSEs}{179}{subsection.9.10.4}% +\contentsline {subsection}{\numberline {9.10.5}Distribution theory for person-level average treatment effects}{179}{subsection.9.10.5}% +\contentsline {subsection}{\numberline {9.10.6}Multiple scenarios}{179}{subsection.9.10.6}% +\contentsline {part}{III\hspace {1em}Multifactor Simulations}{181}{part.3}% +\contentsline {chapter}{\numberline {10}Designing and executing multifactor simulations}{183}{chapter.10}% +\contentsline {section}{\numberline {10.1}Choosing parameter combinations}{185}{section.10.1}% +\contentsline {section}{\numberline {10.2}Using pmap to run multifactor simulations}{187}{section.10.2}% +\contentsline {section}{\numberline {10.3}When to calculate performance metrics}{191}{section.10.3}% +\contentsline {subsection}{\numberline {10.3.1}Aggregate as you simulate (inside)}{191}{subsection.10.3.1}% +\contentsline {subsection}{\numberline {10.3.2}Keep all simulation runs (outside)}{192}{subsection.10.3.2}% +\contentsline {subsection}{\numberline {10.3.3}Getting raw results ready for analysis}{193}{subsection.10.3.3}% +\contentsline {section}{\numberline {10.4}Summary}{195}{section.10.4}% +\contentsline {section}{\numberline {10.5}Case Study: A multifactor evaluation of cluster RCT estimators}{196}{section.10.5}% +\contentsline 
{subsection}{\numberline {10.5.1}Choosing parameters for the Clustered RCT}{196}{subsection.10.5.1}% +\contentsline {subsection}{\numberline {10.5.2}Redundant factor combinations}{198}{subsection.10.5.2}% +\contentsline {subsection}{\numberline {10.5.3}Running the simulations}{198}{subsection.10.5.3}% +\contentsline {subsection}{\numberline {10.5.4}Calculating performance metrics}{199}{subsection.10.5.4}% +\contentsline {section}{\numberline {10.6}Exercises}{200}{section.10.6}% +\contentsline {subsection}{\numberline {10.6.1}Brown and Forsythe redux}{200}{subsection.10.6.1}% +\contentsline {subsection}{\numberline {10.6.2}Meta-regression}{201}{subsection.10.6.2}% +\contentsline {subsection}{\numberline {10.6.3}Comparing the trimmed mean, median and mean}{201}{subsection.10.6.3}% +\contentsline {chapter}{\numberline {11}Exploring and presenting simulation results}{203}{chapter.11}% +\contentsline {section}{\numberline {11.1}Tabulation}{204}{section.11.1}% +\contentsline {subsection}{\numberline {11.1.1}Example: estimators of treatment variation}{206}{subsection.11.1.1}% +\contentsline {section}{\numberline {11.2}Visualization}{207}{section.11.2}% +\contentsline {subsection}{\numberline {11.2.1}Example 0: RMSE in Cluster RCTs}{208}{subsection.11.2.1}% +\contentsline {subsection}{\numberline {11.2.2}Example 1: Biserial correlation estimation}{209}{subsection.11.2.2}% +\contentsline {subsection}{\numberline {11.2.3}Example 2: Variance estimation and Meta-regression}{209}{subsection.11.2.3}% +\contentsline {subsection}{\numberline {11.2.4}Example 3: Heat maps of coverage}{210}{subsection.11.2.4}% +\contentsline {subsection}{\numberline {11.2.5}Example 4: Relative performance of treatment effect estimators}{211}{subsection.11.2.5}% +\contentsline {section}{\numberline {11.3}Modeling}{213}{section.11.3}% +\contentsline {subsection}{\numberline {11.3.1}Example 1: Biserial, revisited}{214}{subsection.11.3.1}% +\contentsline {subsection}{\numberline {11.3.2}Example 2: 
Comparing methods for cross-classified data}{215}{subsection.11.3.2}% +\contentsline {section}{\numberline {11.4}Reporting}{216}{section.11.4}% +\contentsline {chapter}{\numberline {12}Building good visualizations}{219}{chapter.12}% +\contentsline {section}{\numberline {12.1}Subsetting and Many Small Multiples}{220}{section.12.1}% +\contentsline {section}{\numberline {12.2}Bundling}{223}{section.12.2}% +\contentsline {section}{\numberline {12.3}Aggregation}{227}{section.12.3}% +\contentsline {subsubsection}{\numberline {12.3.0.1}Some notes on how to aggregate}{229}{subsubsection.12.3.0.1}% +\contentsline {section}{\numberline {12.4}Comparing true SEs with standardization}{230}{section.12.4}% +\contentsline {section}{\numberline {12.5}The Bias-SE-RMSE plot}{235}{section.12.5}% +\contentsline {section}{\numberline {12.6}Assessing the quality of the estimated SEs}{237}{section.12.6}% +\contentsline {subsection}{\numberline {12.6.1}Stability of estimated SEs}{239}{subsection.12.6.1}% +\contentsline {section}{\numberline {12.7}Assessing confidence intervals}{240}{section.12.7}% +\contentsline {section}{\numberline {12.8}Exercises}{242}{section.12.8}% +\contentsline {subsection}{\numberline {12.8.1}Assessing uncertainty}{242}{subsection.12.8.1}% +\contentsline {subsection}{\numberline {12.8.2}Assessing power}{242}{subsection.12.8.2}% +\contentsline {subsection}{\numberline {12.8.3}Going deeper with coverage}{242}{subsection.12.8.3}% +\contentsline {subsection}{\numberline {12.8.4}Pearson correlations with a bivariate Poisson distribution}{243}{subsection.12.8.4}% +\contentsline {subsection}{\numberline {12.8.5}Making another plot for assessing SEs}{243}{subsection.12.8.5}% +\contentsline {chapter}{\numberline {13}Special Topics on Reporting Simulation Results}{245}{chapter.13}% +\contentsline {section}{\numberline {13.1}Using regression to analyze simulation results}{245}{section.13.1}% +\contentsline {subsection}{\numberline {13.1.1}Example 1: Biserial, 
revisited}{245}{subsection.13.1.1}% +\contentsline {subsection}{\numberline {13.1.2}Example 2: Cluster RCT example, revisited}{248}{subsection.13.1.2}% +\contentsline {subsubsection}{\numberline {13.1.2.1}Using LASSO to simplify the model}{249}{subsubsection.13.1.2.1}% +\contentsline {section}{\numberline {13.2}Using regression trees to find important factors}{254}{section.13.2}% +\contentsline {section}{\numberline {13.3}Analyzing results with few iterations per scenario}{256}{section.13.3}% +\contentsline {subsection}{\numberline {13.3.1}Example: ClusterRCT with only 100 replicates per scenario}{257}{subsection.13.3.1}% +\contentsline {section}{\numberline {13.4}What to do with warnings in simulations}{263}{section.13.4}% +\contentsline {chapter}{\numberline {14}Case study: Comparing different estimators}{267}{chapter.14}% +\contentsline {section}{\numberline {14.1}Bias-variance tradeoffs}{270}{section.14.1}% +\contentsline {chapter}{\numberline {15}Simulations as evidence}{275}{chapter.15}% +\contentsline {section}{\numberline {15.1}Strategies for making relevant simulations}{276}{section.15.1}% +\contentsline {subsection}{\numberline {15.1.1}Break symmetries and regularities}{276}{subsection.15.1.1}% +\contentsline {subsection}{\numberline {15.1.2}Make your simulation general with an extensive multi-factor experiment}{277}{subsection.15.1.2}% +\contentsline {subsection}{\numberline {15.1.3}Use previously published simulations to beat them at their own game}{277}{subsection.15.1.3}% +\contentsline {subsection}{\numberline {15.1.4}Calibrate simulation factors to real data}{277}{subsection.15.1.4}% +\contentsline {subsection}{\numberline {15.1.5}Use real data to obtain directly}{277}{subsection.15.1.5}% +\contentsline {subsection}{\numberline {15.1.6}Fully calibrated simulations}{278}{subsection.15.1.6}% +\contentsline {part}{IV\hspace {1em}Computational Considerations}{281}{part.4}% +\contentsline {chapter}{\numberline {16}Organizing a simulation 
project}{283}{chapter.16}% +\contentsline {section}{\numberline {16.1}Well structured R scripts}{284}{section.16.1}% +\contentsline {subsection}{\numberline {16.1.1}The source command}{284}{subsection.16.1.1}% +\contentsline {subsection}{\numberline {16.1.2}Putting headers in your .R file}{285}{subsection.16.1.2}% +\contentsline {subsection}{\numberline {16.1.3}Storing testing code in your scripts}{286}{subsection.16.1.3}% +\contentsline {section}{\numberline {16.2}Principled directory structures}{286}{section.16.2}% +\contentsline {section}{\numberline {16.3}Saving simulation results}{287}{section.16.3}% +\contentsline {subsection}{\numberline {16.3.1}Saving simulations in general}{287}{subsection.16.3.1}% +\contentsline {subsection}{\numberline {16.3.2}Saving simulations as you go}{288}{subsection.16.3.2}% +\contentsline {subsection}{\numberline {16.3.3}Dynamically making directories}{291}{subsection.16.3.3}% +\contentsline {subsection}{\numberline {16.3.4}Loading and combining files of simulation results}{291}{subsection.16.3.4}% +\contentsline {chapter}{\numberline {17}Parallel Processing}{293}{chapter.17}% +\contentsline {section}{\numberline {17.1}Parallel on your computer}{294}{section.17.1}% +\contentsline {section}{\numberline {17.2}Parallel on a virtual machine}{295}{section.17.2}% +\contentsline {section}{\numberline {17.3}Parallel on a cluster}{295}{section.17.3}% +\contentsline {subsection}{\numberline {17.3.1}What is a command-line interface?}{296}{subsection.17.3.1}% +\contentsline {subsection}{\numberline {17.3.2}Running a job on a cluster}{298}{subsection.17.3.2}% +\contentsline {subsection}{\numberline {17.3.3}Checking on a job}{300}{subsection.17.3.3}% +\contentsline {subsection}{\numberline {17.3.4}Running lots of jobs on a cluster}{300}{subsection.17.3.4}% +\contentsline {subsection}{\numberline {17.3.5}Resources for Harvard's Odyssey}{303}{subsection.17.3.5}% +\contentsline {subsection}{\numberline 
{17.3.6}Acknowledgements}{303}{subsection.17.3.6}%
+\contentsline {chapter}{\numberline {18}Debugging and Testing}{305}{chapter.18}%
+\contentsline {section}{\numberline {18.1}Debugging with \texttt {print()}}{305}{section.18.1}%
+\contentsline {section}{\numberline {18.2}Debugging with \texttt {browser()}}{306}{section.18.2}%
+\contentsline {section}{\numberline {18.3}Debugging with \texttt {debug()}}{307}{section.18.3}%
+\contentsline {section}{\numberline {18.4}Protecting functions with \texttt {stop()}}{307}{section.18.4}%
+\contentsline {section}{\numberline {18.5}Testing code}{308}{section.18.5}%
+\contentsline {part}{V\hspace {1em}Complex Data Structures}{313}{part.5}%
+\contentsline {chapter}{\numberline {19}Using simulation as a power calculator}{315}{chapter.19}%
+\contentsline {section}{\numberline {19.1}Getting design parameters from pilot data}{316}{section.19.1}%
+\contentsline {section}{\numberline {19.2}The data generating process}{317}{section.19.2}%
+\contentsline {section}{\numberline {19.3}Running the simulation}{321}{section.19.3}%
+\contentsline {section}{\numberline {19.4}Evaluating power}{322}{section.19.4}%
+\contentsline {subsection}{\numberline {19.4.1}Checking validity of our models}{322}{subsection.19.4.1}%
+\contentsline {subsection}{\numberline {19.4.2}Assessing Precision (SE)}{324}{subsection.19.4.2}%
+\contentsline {subsection}{\numberline {19.4.3}Assessing power}{325}{subsection.19.4.3}%
+\contentsline {subsection}{\numberline {19.4.4}Assessing Minimum Detectable Effects}{326}{subsection.19.4.4}%
+\contentsline {section}{\numberline {19.5}Power for Multilevel Data}{327}{section.19.5}%
+\contentsline {chapter}{\numberline {20}Simulation under the Potential Outcomes Framework}{331}{chapter.20}%
+\contentsline {section}{\numberline {20.1}Finite vs.\nobreakspace {}Superpopulation inference}{332}{section.20.1}%
+\contentsline {section}{\numberline {20.2}Data generation processes for potential outcomes}{332}{section.20.2}%
+\contentsline {section}{\numberline {20.3}Finite sample performance measures}{335}{section.20.3}%
+\contentsline {section}{\numberline {20.4}Nested finite simulation procedure}{338}{section.20.4}%
+\contentsline {chapter}{\numberline {21}The Parametric bootstrap}{343}{chapter.21}%
+\contentsline {section}{\numberline {21.1}Air conditioners: a stolen case study}{344}{section.21.1}%
+\contentsline {chapter}{\numberline {A}Coding Reference}{347}{appendix.A}%
+\contentsline {section}{\numberline {A.1}How to repeat yourself}{347}{section.A.1}%
+\contentsline {subsection}{\numberline {A.1.1}Using \texttt {replicate()}}{347}{subsection.A.1.1}%
+\contentsline {subsection}{\numberline {A.1.2}Using \texttt {map()}}{348}{subsection.A.1.2}%
+\contentsline {subsection}{\numberline {A.1.3}map with no inputs}{350}{subsection.A.1.3}%
+\contentsline {subsection}{\numberline {A.1.4}Other approaches for repetition}{350}{subsection.A.1.4}%
+\contentsline {section}{\numberline {A.2}Default arguments for functions}{351}{section.A.2}%
+\contentsline {section}{\numberline {A.3}Profiling Code}{352}{section.A.3}%
+\contentsline {subsection}{\numberline {A.3.1}Using \texttt {Sys.time()} and \texttt {system.time()}}{352}{subsection.A.3.1}%
+\contentsline {subsection}{\numberline {A.3.2}The \texttt {tictoc} package}{353}{subsection.A.3.2}%
+\contentsline {subsection}{\numberline {A.3.3}The \texttt {bench} package}{353}{subsection.A.3.3}%
+\contentsline {subsection}{\numberline {A.3.4}Profiling with \texttt {profvis}}{356}{subsection.A.3.4}%
+\contentsline {section}{\numberline {A.4}Optimizing code (and why you often shouldn't)}{356}{section.A.4}%
+\contentsline {subsection}{\numberline {A.4.1}Hand-building functions}{357}{subsection.A.4.1}%
+\contentsline {subsection}{\numberline {A.4.2}Computational efficiency versus simplicity}{358}{subsection.A.4.2}%
+\contentsline {subsection}{\numberline {A.4.3}Reusing code to speed up computation}{360}{subsection.A.4.3}%
+\contentsline {chapter}{\numberline {B}Further readings and resources}{365}{appendix.B}%
diff --git a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v1-1.pdf
index 82d64a77a0e5858b165fa607505c8c9a28b85e1d..9c5afeb8737d704d6cd691dd9589d1d958f2129c 100644
GIT binary patch
[binary delta data omitted]
diff --git a/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf b/Designing-Simulations-in-R_files/figure-latex/clusterRCT_plot_bias_v2-1.pdf
index ceb82892d2923040dbdb622ec8ac816000135124..42a12489fd986f31b64d1bcc04f834cdd2070d66 100644
GIT binary patch
[binary delta data omitted]
zXxyO5dHq=()$@Dvat`LX@f^*`i@$1k@H1b{(X80Zh;qG}HyFcun@@S~H7&69fzkQY z0K^xG9iPN#^0h;sND=)KpH4P3_L_~sI3W7nBR@&(=n?C(I1)S8(jBXCZPZ~9)*{id z?D4+0Q9ZQ~eE;L34-NK?W)gWC@M{pVuWK8mW8O6gjSalvO0NYec9(lhalRsk2|BtQ zU0qqW&+yNQ09!1(YZR;g-b3Uf76o-}RH8{ZFv>R1B{9>fzR&@?+d%SS|Fn12XRr|! za9b@N>>YxFB3D$^TxtE|>znyp3Ns0V{)Q`?vjQL8>)RfN=nrLa)dpt*FVHFrXpyUov;F_9kAtEXIvc|}?FV#Gi-!EjVi}obA#Cf@q zQ#8|@G~m_Ro6giBwG!ohpH$49<33KD^V;SBBLtntkI9k4z%QHps1D-BM|fV5BK4a{ z^BZ=^ZV6CO5UgJ$j>}z>?XN7o9}oH02F_zF2Cwny|t9YE}L58}OwE0${qolBNzN`m@+^Am7o@iR%v&{-eSezF`}e%?5Si zyQ>N7`cEfiQJDA?1iLMo6o(0PXm3CUs=6swp_>>j0@N>kCohRRcaF1{Nwoel?jx;& zVe=WSMZJfd`Q-jYMLTa`_o&y;BqxS5r>9WSOb0N}0|l^64Qgd`O+4UBCOdD&ciTf& zthVx#HbLHrWAaMW@E%Z_6KnN}_^?gAUaOc@f&7K2GFu9Nd;lI?V; z+#NsT?;$$IHF!d{n7e!8*t65Lq+q`&)s%h|+hj}vSPw*cNRvITCWzWLF^^n;#j&m! z#18lZ$8dtm0R8$r^6s0NE=uG=#&K@p{>3Y1EW2IkAEVWc1rGmh4F6#`-Dj`jq{`Or z|MzW}-KiEDrQ|SC35rxYF35jmoQ;H}s}3TbeeYG>l97h_hO=lnu)%k!vGiQ2PI2_h ze~SS2EZqN}ueBhzcG5&>;Gu@|B2M0tY@pBx%4MFC){*TNyLMnphz`Z^k7j(aNzLM| zb}rMg6^mAPhI%2l1tw*6UoFTX-O>6?H9%4`IJ{RgdEe~(KMiDW0J#G9lTPq!;yd&C zoeN}2Xzd!vpjIYOFN}`sI#hW;)p48ue=9K-R?I`hQWqi&*d4N{!wE5AJCGU+-l+|o zDb(UHG?@?+u&1WFFCHj0H@&jme1y_+xT<3tc{xDOc#pt&xyPL zj^?wZPHq^{P=#rm3*3^KIB#=+`KNcUWLDiGKQe2#eR{8sssQMFgs9n9p|DvPPd}irKjuA&CIo9&zu$ty1SFeQR6BO_z%H-3rW%1CWS7X5qig{dOMW3X5 zlBbM}WBP}eXL7|txXz{I4l<^-1E8o-F78*e%x60u*hAL^@s^aZloG3kOyA{Uai=%! 
zRcEjInoz#qza3RtW@I=%x0=fMy3AKK`hlwyqNj4Z>cDU*#l8%d;M-yRdD!#kwyv;V zp*aNjfGO9@tVDTJpck2PYPj9jVeThoHswdR*3FyE%Zr0K-MJGgUrD=lnAIuQp4l4| z?bSKhE3%FIUtWGT=H-KAI9x_Cga#+oH@)@}=>ES=WcXc~9?E z7J*3VhlHVr?yud%8-~Y+YjguM?%XvvPi1&xmB%1t7;$h?c78Gz4XfW7=_u2%muxzY z-pj1~)v?ul-q9h}>^n-;{PU(!vuX`0k*465vCR2w^t4gQo#|gb=D*K|jM5JCu^)th zhLlB!y}Cpddn%XAk=AfQnT*nbr)g?7o`jCGg5FoPrnA3IA%;p6@@(3CY}e=slMC>Q zzcjWXt4Xb+>G7*1+HHPEar8~T-Sx=wL5zQpb!Rs2hB4e%>PTi~tW z0VrCL2hIbe4XdZzJ4Z55_mLV6uBk7pHZQD}3qSSqiZRz+O?T?-UCb`(b4Q8gDC^)K zRm~WYlTR#uD@Z3T@zM=c)zqfl#y66Ijx4^Kv(}IICD-w7C~NYi*e%t5YBB;4{t}l9 znfi42)I@vj1E(8P=DjB-a)Afpi%+4$I^bY(ZlTuT4>I0qs;NrejRQy3Y5!GR74ebfGSnxLKHCMl(_`tT$4EzGezP8THja!TG zsZO1eGA-6mg%{uAnJ!;7v<5h3x(nBfzO15s_cvf^?j@_fp!L|dn%+#?-X?*re``0q z(;%AC)z8MAb7mgMSAm}kDrxV7FXGdb9Jfl&Z}bK|3^q5P;^i-(p7uSRZhl?3PfeO4 z$h}wmqFv@JWM!lSI6M3io!RrnFS|31IdVet?7V18dqU>-AvW3sP}KWI`QBGQ4Q3=p zA|%sxv_DRZy?4-kQ@GgVY#TcUoNY$C>k^?SoDjYVPV{$!Bis zZ0_o4PVg4cjkOwG+LAufCtwwBJ8Fk2WOF9hmhVQIzZ zYzwjGGq<#`b#O5K&!CMf#M;!^)zQI}+|`8-Y7K>0+Vh#4UfdA&#?;)hQc&8G=&A@j zNr{d7KSuNLi?H$Yv;BWQ*P4<(y9fjOOG`ZlRa-Yp2Jo?kI2i2Tn7Y`&EKM2QES+Jt zPzZx4uMn>Y+(B9HnwXd%yh>RdcqAmoz<=?>0AV1!{6j!QL{#v9jfwFK|F5w};H>|9 z?jxc9b)6``n8^Qq-J?h1m)G6-p9_Oy!f+oIX%ldh}6_`dJ|yZ5`#y*$7(&z@Pc_g=H&w|;9TEADz+?Db-X8#jb`;Fps0K%19p5tAZh zs?od6$c1}9awm@BRmf|~2hDzVf~jp!i5HV8JU)95?>Z^(B49Xfgk7EP2f2{LNQ=QD z(N8hbG<=^ZXnQ;Nj}I6mE>Cst&#@YgiBVV%dYPT;BvpNV@B%oomAdr&l8&T zdaAQ@G`E+yyO&jx)Y*CpaESs=Mi%V3?x!qyci>zaG6+Bo}w+H4Oz(C;<((N+j_3Pu=G2zuKq-PQlxTDAwl zx_dMXpHxO@<1~{oEdzGnpn<0G-Hgff?y$gYBPikBGi5HWlFY93yTTN<9yFm{rAoHk zcKCPCdL+q$Ore1%u`&aU(fgVS%Hnn21$nKaOlTJMFg>0}k9KZGX*U1P%0}j%DSwxq zj}?b#Z>t3^#oqPk4t1(nqj;sp1&{3)0tFSEsIQyGi!<|D0@&Dr*eexLMA8L}v5q;cqf zw#3n?uFae?0LY(wE1Ut2M<^E*TABoN2`DlmU zjZDj1OeStteXpXPeZXTY*J#XHzcE7Je#1GC!7nJhl|2G@u35tLmGLGu>&={0wg=m3 zj|${-CK|3EQ2iKBRib;Q%{IV#HIq!#GcWcJCj+v#JU$QdDy^s};2CW`Ct=GGdgwnl zs$+@^;XZrg>5KzZvibi07=>pS$-S2QqKNnI$F<@iN3sS8u^7%y_MgC=Uvq*hct*MOwf4pudBt9}{(HP;~ms*i0jC3g~0^_;&UpHUy{M}#hEdrOg?wdKN55FnflCcJ&hQO`o 
ze!5N@rU=d_A~Q~~=8dE1!-iSK3v~cgzc4+|Z*xM@hT5J}?6*XNv`W;YBdNP2gPVFs zRHN3PCuD)`Bq;6{yJo)kY;Bm?ZdRgnxRxZ{qAE!cS?M%OwkAaY~WMU7QJ?? zw&j|KS=N=h-5G8i4G%Wv9DKogQ$gtUDdp6ZlM!2EEr=+J6!q!SbI~W%H|gaq9VZ9U zld_l6X$>`cq-MJ?&%2L2MtNG4U*n3=R*Y6 z;V1@!0|g#4Hqwd5Cch5kZwe^~Z_baD;1s52n*llMJIvW$_H5z&-;i|{m5!d=sY zT+N2iVC^wV61{QfG@6E5)jRLrj0dG&WD+;9k?tpKr==F)Th{`C?x zZOo$|r*kzbwqG78x;OsZfA_Ke3?$>b^{wvghpEMuAEUy zX!Gl_J9XC%4CNB{ZUjh>Q?l?M9cB=xe#(8_?Ek6!VOod?HvKQte*9b7cqF-+AJxFe ziBNc5ysq}7FA}QT7!P03j+=GskV0Wga`5o(OY?sp$WkCrAF*WM;belZ zuZ)PtAj-Ez6!~*NO-S1reqMfyo$#~lZ9+ErtQdn{rA`GgkoxJ$c1}HSx*Qf5rZQ9J zS-$zLH}dRrybR-w6zM6Y&!Rrg_%Es?o=zR&YboG@YThHMMxBVaYqZ(q5J!2$DG%yD zY1s~HNj((G=FD)3%k%*)TH8C9R89-WgoLLtk0>YXcCIznm1OLOqx{{pWGs4$U#5k8 zw+?{Who3?7b?V0GEVZ9A3*zYvi7FfE4tDC+`s>hpB(WB73iS`7EWg`0440YtH%+QT zRkt3KdcSP!UlL^2$&aP;@w6<4P#RpB!c~;pBl+!dOGv@0)(+x0^uwx9=t2*z2ky0r zHRV9JS4hvH7kEb>S2+xPsN_p^i^4kq!xg9loH`l>?3`K!1^h9J;|r?q9MliAu(2BZ zjhbFD{Nn$1f4zC{=?~FjGK{lhA{;2fTxSdv6?8sO2o>&mt5}lFL5pn;()s^t;pWr_6T-&M?;H{tb#BSn9nc)3Q5oX0z7iu6??&SK* zY564TqxFn>tI6nHW_Zd&==3($GEl#Ro4oHw%d;9RwVf|~{z2dV{9|@QWUhUk2;e7C?4Nd8*BexJ zj`{TBSdbu!lv~f0zsl`WQ9{D=LuLN7h;T&+f0%#ECo<#&zq35skbnz&^&K!v36w5z z;}EGCieA1J`Lfce`^>Ph_oUD)u#bAs4z8ALDIcXcWntW!4V`=JtO9|*3L`VuJgQM% z)IsELY`CDIgD9l_H(dRU_K`-Y}^!Ne_1RK$}e_&tU1UXY9G zUo!Ci#|H#76Ko|qeH#4v&Hx*A1`YJv>1zFx=c-1+zvu+JEK-$|0@W45++BRUh%`~* zx_FxE!=&q~_TsvaoyQD04zujke){~{OW{PNqD%|fqx>-6nk)+WiKN}Rx$%y&s;_*B~b1u4PtixqK7GdZ}S)AoqS z_}e^9FtT1HcyM9p&aj^DFQ$cSk6n7TsiPwPndN7&#M2IA+kCYPyL@X9Qst(xn~xlv zc!j^*s05%{I}h1HYDm4w{0ij{cV!9jZRIo zPsDfF5;b^ro~Ko<^?=;h{-#+mEt2GITddeCVgqwyg3NN_9qVAa`)cg^DrNCTIH^a$ zqR+lA%2a%I);N_9!l?uD#Tb@Nmxtq#y!QfCxmh`5rE!_Jzm@af z`(a&cwh&gO$TSI2@#FbVCQ%5!c`%P+P>|X(*xze3x~R7wL@0!KS2}!%Z*Abu@(5i) zc^O{Ri&?xeifkmw!$sVf42Wgl*Q`PP@P>#veoQ{;$3I{;;teNfvT|<$g5DAK`ZJo5 zltq&DYRbf(-U?Kj2^jN1eOL;JMF?8hF~evckqs*=lkl&82?Te;#gi`|5gP>aBe(Zh zc=*#y`Oe7QQMpx0-pPrHD(td)p|_4-`_ofn%Y^L}5HwpR!)z@$4FUW zRW^as0pG{28WEKz>u>ze+6@*(`YZ8Y8mbDd4B*#mY*Fqc0_V6h-o^1POw*B5LS2g= 
zA+JC+;c{(%bJELr+sCC4)mlIwP&avb)8X$L8%Yo?MI9<6N1l8xa0wRQ5vO6zR@K^h zdTynfq5nRVfmkCN3*BZS=4o>+a> zE{c`cw#2)wM=PW0Awy~8uIZpIYW8S5NMR3V%qeD;(szuacamau3hYeY`(qB0q#n+= zQDA;;&sJTH7cqj!;$yaVWHIR!5PECr?DSGVWd^`1THc~nfM^+lqfIU0hK zP|=oEz>TrG?z$B(hUDQd5zNF6h>_-Hp(0UjKlP0J4nBYyz{agSbU42yfY1qsC5WVt zRz{Kmpt2)jgJ<&MEKQ7T9M%3Cq=U-|%y9kr=LQZw=nGe(Y9bH55{t2$PiN8bQrD14 z!nYql2GU}7x!h+0P_tuNfdL-V3K1%YHF(`WF^}ku==+Z`nPz8v`66|O!qfbCWL4iW ziUarQDBk$)!m7X+kU%frAs9Wzjw%&1PTe3R&;G)O~q6) z=RcqXC-};It=!_n{!PUkGUxPw8HJD=bn@Lyoho^N(Z;L?Mz%Z=?q1S;`2k zMjBIh1!R1ZbYCP}cADsYqw}5%^gH)8e>hK`Z1v7hM29um`U$P62&1|+?~gTA7wQu% zPOfNp4B3>UW{}i_c#5C2?{><%{P;Z=4$hr!j<~K*d3F+dwHOuVD0>ZC0m`H7<3q`d z(#}G|j3Ob9@GJ&zwy7{j<&mFCa4{{D?g9$)?y2ywTQ#Vlr|22}8r7|+Gp!$Q!q`2a z%vFf-SIs?6%H8Hl^gG?`u3S|UJZ@*J$~DfzJ{;WzDt3(1d@?3s$_>bV(@@tkli;J# zVZ?b|AI0aMzS~y*VF~6T5m$-NNDt;8Eg-m*w!G7JevS2NL0zlUmURExkW@0@%{ zNH7keIA%x|x(mE2mn_ zN~Nzd2Ix|YF|1mx68G*4!6mkMMO1`2<&YVsc38%#RP}?y)`Y=Fz9K!&Fr*7Z%p8ta znc0fVU+3K)ZkvhFu`OjP=y%#%lvpDK-H<$!hi$_=KR!`+H>3ctQ&#>r&)By@-N$x6 z=Vx_hJ-IM4f0=k$82$Ju7z+TOG2^1Q->%t2P#J{5hAkgW9DWw*#-VoqD7*&76(5uA zVlY-JSuu-|C&if2xJ}e&Zj*)?5|F!TC`sCXccIv`sORS$GWEX-m1FH9q3fU++$3uN zA71c)P=2F+%w`fF&75GoSTF%CGh(lA7yKJiVe@XwA0I%D%j1pg3%K2BTU%}gD@W&} zgtFH08rG?!XXU!kC*|>3B(=Oz@aq&0fwct}IG;+DHTjDdupv@#0o^nR4Tc!S2U}SP z@A3w>`=4zGu7WjezQzOAX0-2oW5UoxFjf6r`{pTu)A7A}^aUwwP?w~Wotsi`@{sFO z7tWLflG}YI5VbrdQFK}O<))CmXN4nUG#MbAJWMs^)^{eJctA6dx5q5^38>t>V1p;o zYwpT7Y_&p{#yA54#MaL^!jhQmT7(G2rEk>2r`@RS9oWs(!(OXKm(%YuD>_+D29AA* zvmM}vc0(IiyjYS^?-^@nN8uWW6?TCV-pKAoJa^O>yL|B{+Nx6dP?bPR3d*Reis1Tr z9Epzf3Art?opa)`6|#97Ab-$oU#K-)``$o)<&jpcZlWr~t{@vK8YmYNuIE<7gMLeAD2pPhq;l zvcD^{!(X^8;5u|eK!*X9XRtGG-=G*=deAO7^Sa$AB1>!qJy(kB3OhKe-(3aE%ZJxX z;I$8V0X*x@QOZYlKgaLIrirb7!LR2~oFu7gMBCN7`v^`JHgbLZKMNshRH-kzl+-=* zKl5m$pSv6dbGzti$!_T)JIDO!0<3C0JY6+(WG40A_rICOYbCIl>Ux7;d}SA5cVpM` zX#MZi$abHE)~{>Jtgj2zxNEseaEv*vCQsMjCb-n?=qNgr$=p@X^06KG{?9FB^<5j% zMzll{8PLoMNO&wIr_h$w9$Spv5KB}HGy(CH8<|OL9J$f0oTqsAqKiGo{?(*QOLmuz 
z=m91bQSKnmdVcEyHtz5eu^RbAd&nmBz!XON>m;BhM*YOsu&c^%xZ z*XJ&HHaXt7E_;RUuJYw#3_~Y&721XvMxGf^XTV18;A339o`{9ER)l*{QQWp$C%LrA z@7nE#CsEN~{|;}ax=&qE{LqUovKl6m`3OCwDHw2n^GQz)HK!#bCuW#TUJ*R#MMYu}z z*%Jmnz}|6veXJ`$b|IQ14n^Id&>OJG9ZS+>;wsMwv;;lB{J~Fuf^Q+aOP)*;XPK2| zxs>*xbk}io9qSm2vz;~(f=I*W^yXuT(AmUiLj(x?cEetx2tUWhNFn~aRR+bfUZ$(O z*)*g30}f7kB>!!C+#c$OJU(xLQbQYazb^Ok4XK>ngZpP6$)plU;t*&K?wG2RC(w~f zNfrvJ#3i2@mdN!RP_|OY)PTb-(yb7p(XFa2m3#I*=sBF-pir8t(LlMzk~}6zpKh5R-i_e`;|bA*TGgR`4Xh#jGPyz z#?4@98N0|c1Qwom(Z6IpzO=lL>I+Exsf#9TA-*N()o7Ve#TclcAYMQaE77;BV}}2A zpY68kqTF|f%Xfl&p?tC>koyI%~Z75K*;cS%II9rCNCL0leB z`o8@xhl2pB02&3c(?#RP+78_Ljww=1uHW=Fi_i<+sKr5|X75y&_~BRk5*p45B0U=B z(Gv61p#dIJh1I^0YG2SG&hZ|(RvDd3EPKUVoa+>Za=kJ4$+|DP$?=WUo;A_`h}jH;eA?%e-}k zUCSF!{Hb=>IJdi0(~Vo_Zgunh3av@*tx2b(upl#VMc_#EbMtEG*J>zuj&nXl10b}M zs!QiLA8S?Uu*n}6d0RHkG?*2&^I9~vAWpqKj!{-L`ueQgcYa{VdLohdf&B={*Gl$1 z7F8E>ft)V4bWBK0Ywb!OA&`udt@NztR(ZUz80jgcB#GY$@(}?<)m1bO_7`6aR+_Pmel)lu6*hVp&3XsF zqqVoW{?5hury*39BCl~|x!sANO-S67!t}|86wMn4y#|%`ic9^#o zR4f1pkopA#NSE7@b2_39>_rCGSqfY3vd&8GTM;2~n>Z}1rG+!Tn_#ll;6#5f*nIG+ zwsq?fc{e(bPuk05Bti;|G)Jkb`;q%sMH9 z?JBPuQ`WhKoXrMKQy(k87AgN*Ze#Y*(>UK=+tIGqe_ufSOysg22dM1#+KH50Vuy+O$5Ih-kRq|!cQGq36B#5iUSUD|l4Di0Jk zfhpvQI{@qD#|{q0#WuIAhmk`;l*rWy1oX=#2h*#M?ABGXc-2R7CZwls2$D*p7t42)Z=%B-#G=>zT<<2~K<2+uweHHLsndb8uBZgMy8L5C848NM0D`Ro2_&E7~E zmxADAU@H-1TW4DwJHKnyNVfpg!eq!3n~40;t~y18`6Oi-;s4+n&K+-7%tA_SRL6~y znSK>amig8CfWb*lal-h0&&YB+!0tadM@*>i53&YV!N$H7y1h=K*D3*PotjCg$Fai| z@u_+GD`~%fsv3PcTFHQ}x#kQYlGp!&lq)(&Ui@hKC#vcw^@Jd+@8Cow<#;ba5^Kuc z<@WRuO4w$fVnZkaVOhnRG6UJWD^3790H|vns*aHvJ|N`LS%8-YNJ2;p1NcR5u>5Lu zqLBL+4N)sw1%?-#|2>-XAk?&vg^CN3-&hRN4(*uol!cGI=T8(7@rt^ow6IXnby z#U_mj|3{+dX1$4z<}F$g8aF*;8Mf0yWM0uh8l_Ygd4paGRpVfV<9IY;*NB|~5?h$c zp9s~QKUiki*)CE<+e*t0xx(PP(AT3Y`dn@Yi7|F(B0T?R0;{T4?sNmyMzyuRkKu|&_-z7js;Cux7jM;+7mJNU&T=ebCAv$A<%zAa>ShHH9zet zodKS_yjkQ%OD_AWPZBE{H0vL`i*b%%*7~=E>JKG2`XEM2a)fMiLW&bPBfl#k-Bu3_ z8${4hL8`QW%ku`G2A0QXT%;2aR>XEO2Lp&X9OlGpleOLh4GlO=WrJ6 
z`Lfs8`pVa7*5@@=V=C?H7P6X!o^D-BK=LM912VB6A`3TonvL{|pIGV-J>7o!+ulO9 z*6{Y%>-J*w#unnw3eCBm6|PZlcgq=XnaMswBL~*lr_&2uM+#fnVP?4Xj#kED>@(GF zEDRB@39`H$>}Rakx%r6#bs_g}xmpo*PT6{T(lZP#9cG&FmJMy@$K`NU(dH$+1opQK zcQ=9MpiZ4*3F>6=(&aD_ue?)i>cQp@;{~BL&S&n3jIC31+HLeXA@NvS{%Qn+w6O>$ zq5MLPz`q7na^rnuLt*vOSHH-l#e474uof;1*3?e>xgXvx(!Wkyix(Yhsoctm7D8&i zdhY|dO2MYsv*(EvA^=*X&H)KsUZ@t~cSJrDM!M}tTFQrb^h2nQ5>K8$`h%!o#^M~9 zD9nCsoByV7ya4T8Ke3)T5eLg2_eYQ`Xg%TD@oR*u+3UGZ8txkl>bt@#c~ak|g>|mk z-l2O>u>QnumfUs-xW`ik3YCQ*arNYmLijU}vn2CM;jSr?Z>W0Ojntx#k46qQ` zVn;KkGzuaeg;VA#4X%N_3lSPBi;nuzMLipLfd8*7%>0OFkpwOz|Bw0MnE~KRM01>7Cd&=6?HghJ4>GH zf7qwVQe;tFW`}s6y;17=;P35UgeXgm4P#fG7C5#>wC5s=^ zzpV?hAGovn9D|JZT7oPr$Wn2}uNyUG`vASyxiZAdQrG1cGQhvZ5zS7H*TELJAIRgd z8sE*KrObTPRHHyj6hsq|2YL?KZh66T(RB~*R85_}1CcQVM8>g`tI=T^;c?{Cr3;5g zMgaz^A?1*(sYul17$Ax9nw7~pr#RT^hTp|jmJ#S_B9_AZ!Iv+0*8FW7vdE*QZY$5GHinq7^dflO?So5yZ_<7 z_C+_{E zjy@8=%2as4vhX<=;df3%tV_*2!gVFz0_JusJ@?c9A!k4)-5}f^r1@S3msU|ZPjX>4 zlW4uqg$U74x1Sl9Ikh0Xpq9rzzauJu4fbK=#bNA~%2M1B(&|XClB0O{U9{ENV zeucZXRGV}STv>8PRuW!&jB)=K8egB2mgi?YOIw#7@GpD?ZhuZSUI%XOrDk07wgZ?@ zJ5R0c`Y*%5O+r%1kp58kx}>60iM$>3Ts0=_biW81sog+Y@bh=tbsMo( z)eKNJJx)Bj>IxkHthsLEB;{XvVECc-q~^Lau8~7H?ERe{DAwr8w3Mp@SF;Q~Eos<4 z9V`rF0S#~q_@;w{&eg$zoGC?v2+`feft?XN9LaENCbt!usemRpR!!L%Q?Y`jj_|x& z@qF7%ZPvFw=LWfyWA`8sax@U>(-*Z+(h`ABl zq0-EU5ers*Dt@tJZ&{0?HpG8fFRBc?9@N9_&fFejQoz2r69y>aHf0gZB1O9DqQWGa zY(}oQ(pNHu-8M{C9xW2f8~yK%I1s!9pkWQ;O?T!{>SdG6*eTc?EZ>sv}NqTBfvja@Z* z69D{-x3=poiZx|Itp`NfDUt_5NS(k#RvUa@$F^MG!+LuKfj_{daedadsplKQY}j1_C~*n@FWxF zq;mKwO2DQDoFcs3+T?|sd`3$@`(q5r!ohI2X&b}U@X_yptc$=;s=WJ%01m~*h!wvU zg?rRT=Zm4)U%plAp=0Nfj{egv>r(G_OT#Zx%`E`)#sm`m$`j zM9mvEXK?QtJYnD0yxUx!VX2me4MutDY}1`Y&KeaUY_&X{%}1|VID6Z1$t(GTa9`oQ z?P#FJx$O6e-r~1&kjdHOuFUT3H#+m3+a#~=d?#4HD`F7S6Ev=FE{jvMJ0IB9HkH5( zGdYXOqULmoCg^;Pk~*ReHy_UXB671{ex@rnGfn1ouVsR>p_xil204pQ^1i0|_O}eX zy|JIuZp>Q(!>+RF4x^c+ivC$(YYsz(0I9vEZAn>!cVNjKOlerC%K3lFVEgC?5Pgz| zXwyan>8BUp*&zcTF`X0)-4a<{)@;4I5#(o3D8z^Kg^@jgWP-$KE$Mqp>Ec$AfjdXB 
z%7#BBTzvvTj}B7<$x4*&wd+gaI!N=zDf8;}dXJy)b*KB@U^;C;_qxuLp)UrFxzz2Q zgYyqWC%_6$M)dDYdRrybr#vC@!Csr4CERcQCHF@dI(Ed6GmSF#f~7r>`cf==yGg&B>Ruclc#gsvC6CdW;&@BgUN^sU-%*ZYd{h1n>HBYG*5b|z>mf2P0Y z=AGWU}di=x*XF^zY+Vo4R9=IK?UQ0Gf;(LTByAOK!taApMkyOSE;Bx!+K1kOMI% zpLvklI@_&AcE0M!R4i84P3_XsQxD0RsweW+?>x$k?$Mf7$O4g@-#|)3@_6)GBQsUI zhVtOD`Hmt5*mk!&$FrZ?Z1c<5F5MeSs_Ks_s4$AtwJ42JyK1w7m;#9OFm|Bz9Cex! ziSsCaBVAEcEnVeZ&#m)&?@OJHKxglVn0H|tj5S5olqf>aluYA&ut{yHWP-Z8FF2av z_E=o7?GCFmeTlDWW)r2d;niiLxvSa^A}85_jBq6* zo&0nQ-_j#QaM=ZaN}MX6IK?yxCffte=bbSM`%Ul64H63hzfwL5iZi5ypdfO{IU0^} z;SM8#L#^q@mL<3#lLNlF;8wLXGE($YGH|~ zmOq^c?ZX?Z9z8gjj@_EB=RQ#^!;mkd-WUW2Hs@PKB~46Kb=YZltqxDF#p?i-!92-K z%c_!}Zo8}P>bq{2E)ISWOxQQv?&lSyc>^sP&H@3=O6FDbL*&@J2LjSCgb&b#>SnX( zUIziB5)0Ro`S)FF#yr-$N1X|Rr|Fv;HWHoxctnzG;d!lZ?d-IZ+?{bN2(RCr<%=s) ziRkFaVn(|>9+0sR**~JO%gtUb!PdjPOXP)(25b7 z@%YJ;gxf-t5))4QU^is)IN|TQHY_MeTMA2Eig}|$p5Vl3%$Yh3O{KIoY}v;e1Y2l@ z>GCn>+x6t|V936|Qv?IHy2yoqY9JJ(TTq=2>a`0r3ax!ccpY9L-5)&4114Qq369>8 z94&&KA`-+NuMBs{2V>LVLXX|pkFR!j`ls7`f*>_Fm{#41_w(5mKppVuHiI2b#f837 zKl91)(dx(or!|b;Ft02SHO5&AIs%;LmNk>nwdQqa(Y0_wu+2XNvZ#bdFqvd^_l9-}W{1N^W|W^Sbk7m%KzOW1)pUWZfP-K;FS zHJu7-v~G`+WUp%I z`#*0xgxsYNuuwEQa+L}#>EM4?pAy6KEo!Wwnc!hnt7pHPloZQyS_*>?^ZuS9ECWo? 
zAZ|c>4%z(3a6YETa6*--7$0rs2&7ytwrsq1fmpTiwXhgzwg!=BPJMf1Q>zzPzdKr7 zVYkJS@ET}@7xBk*xD%Y(KyBYuM&!$-!_iC%MTP-B&0As0=C4*r}FJEZRM+XjzI2c*F)Sg zxZP}i)KwKpgScuiaVyav^wW9c>K_)3S9U=EM+~FJa z%4M{(fOgp)96q-b_KR)$kB*r(aXigQ3C&E|QkI&L0@hF=k@cepXzk&iR*W~A_oecf zH_a7C#!nG$VWgPXBW%mBK02cy=M979)Jd&^FwH~FN=!^?AT@6TBR9NFmQd4!<7 z1cqUM=-BRSq{Ucmeg8rGX9v~f_RCV>_{U7=mV6YV(UViavGS#q>0nWzk0Ai6EYEBV1WHm6V%gZBoEBl;k9Eu_sc`YRHx$Dwu;>oVw zTU}nr>wfi{^l$o|ld%CdsE3XtMd9IP{^H&VVa3bl^B&8`W5LmF*zU(mcvd3Rjw$bA z_FgB2dI8!ix9QTBn(%H|W)YQ6wY_UPS#AQ(o@7(boF8vm1$PMl@xFvcdmm-n|C!sF zDoPP-=N5W4R&=(xCpT#2Ge!%X)zL~WMIV!GQd&~dZiTf~z4HeW{YN>De4(dlZPgA3 zHRK4%^q(E}sdCgbe5Rr-h!+f6udy@=t@A2TwX~0Wjbs(StY)lHJIu<^?S=V~R*s0B z*pjc^zvU67liFKz*s*GOT9cJ8&#cYDaTY+|Kjj{S&DzqOhv|G|BWE;^LB z0e&k%NtfT832=6^n0^3X*G^M;p^fVv0wlFuH&|Eg<{PQ7<=db*#&F+*%Q=>e4f z=x6AkGjbJ+8IrU>G#^~T{Cois+kAP^-YQM?q^0xe3NZ8OFD@@J@t zqFyukXD_2f=#qHotW^8S<~Svaes2AiK>p16zW*3-G6#c=nIbr3y3Hx*%_Tnf?Zv0> zpxIV;&g|8f5T{ZSmz(8%il^DbS^&u3_|yJ5$nIF*^1hUkx;)J8Xu8-xPTxIWr(ORe zI_`abv2aHF2mET#`+SbYZf9wDy@2Qs?em-j8F;lgKNAnHAkPzC9-b$xc-*S?X0mUb z%pX0L73JgM6XgBx9GqB!>pCx7N@Am|Nzxqu`V%;l5)1fw%wJIvP98xP9v+tek2{FK zU6u5SpFDc3Wv=%~#mdF}5oQAToDwOP02eR(L`e=yh@TfOttV!Z diff --git a/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf b/Designing-Simulations-in-R_files/figure-latex/disc_precision-1.pdf index 39c4c7d9e33bb1e3ad95b50d020ba75fa060cac0..7eb6b2d2f002618ba32dc50bb5f7b79d19a1bc1e 100644 GIT binary patch delta 6142 zcmaiWXIPV4v$mk96zPie-ULD~A(TxQ5D?HHQUgR8Ws z=@7bzNGCLD63Pkg{q66)&ims$zn(Q~?pd>D%{?=h`l|+LF5>Ib(z0)aIg>SsDUt4t z57<+mnvo@4BbKm6Tob*aLamuZ6l#*9L39?>vy@%(SI#!p)4=TcBPy=%Ik6>y*|D*G z0hn0~26uFhL$Y6tUCv{n7Uu@@xVCEC>B7Q=Cw^zvitXaO`115n7L%Z?2+KEj`?X`h z?s)Op$KcO4{AXl|_grxB32w)A6NyWpn#Gy@IzIpOA~^Wx?#Io@Sy_tV(6S(4g4ui; zqB+iG`gpe9O&32WVD(aY?jlgJt0}ucN8JU}236Rj-YkeLSv8t*5ej<0?2UBrdY={b z9ol@pJKSxQFk9ansZSGNQmDx^=Zv+7tqz#r+y(9rA51=j#_0u-cS-xMz zi_G3#B(Qk1+~u+P4(+Y?irn|3o4ZjZef{qeM$!}o`jO%4rWMx&f$^brnF6|)>371c zx$lG-XrUd^4ivJVUge~f?}R$Y$&RU 
zESJ%LeCjEbzrL0tgneBdXriCO8+W@rM5!yhOv3eBDmml$_0nWocj~eUbyK}#vG0Wo z1m7(lrXWd#@;%Jsz4C>JxM-mh)nzKC6Ee1ZEPOnDp%dDqa<{Qg`NG;fgYSfCuz< zF>)>4_N_pFO^&*0HF+8Tx>^__(cJuD3dz3*sGjW?7B;Enrl>op<_nu7qONXYoM!Ks zQ5^=V`}7C1YH8iA@`e<_7;j`Tj}52kEFJW1rWj9?v*H+JQxe)inM5qpgG)J-YxeB( zf3QjsVT1J(Jc3e;Mt)(`X8iMA{6m3ke{&)uI3YEq4b#4qwA`q00`!CR2wn)~i_g(- zs+;EU^7kc$3*+h6)Z{de1o_7Qz0xubRC_5&gN{F0ISM&UcmOmPQ_u1<{=WS(CBrmf zVU*!;PtlMrd^%rOMbXvEh?r7vvKN5*j&32Do)%-$*nUB{)iau$C3qPk4O*O-4SjoR zINI|;B+2~Il>_?%Jt}ey^(AQ7Mc*J#RvR4SuWy=WMiWN>&_RE_7qmAmji+9>9b4sX zM8dX=j1tVatK>CxJ`AItu=ESOfyOO6yn$vU)(9JmDSXnaQtGSTN|A+{YWM*i3Y_@v z*BWIA+{Cq0>JZ;cs#7$?e&{Ee`Z>NFT4eDsOhYJ?SnL)q@bJW8Bk*uMM0)MD7FJh0 ztu7g)E+>@nIlRlH;u<$FUUe0qtuZF>8tL5sJpU;jt<^vIwwKCYMQ0VKwo21r`xmLm zNgn}#mgoMRZGqCTa_UJhwyJ;h2`qkl(x*%Ox-vphp~2R-IM07rE~FjwTc2~3@qvtm zK^JM>O{>Pl6EZM?LoAzJCYGHK19`7`5N%Wc5oh;TIPyQt^Xh`%4|w1q<16J&SBQk0 zexJ=T@@3p|s zRE5p2p_PROwaHjH#si7HySPEEF4YHRd0{WcQk2l8D+eP-eG)n+$Ou*sqO7GsLG0o2 zIu~Dbe&_vaMW*u2j{YuoSIiFle8LI+oI3HB%|6ez>#<*3}TF z4~CX1#cyH!&QBVV{=(-E@W($eTYVepRw7@PSLYv##;uB27KAS#jziZG(xy$U^+V>a z;ao?VbK7c+J8&Uq_x^cdmYHO;0hOLpK)7k?lT>Fe9zD!3W*hjUf=;n0PP-nw<#J1G|<&OskQ>CJIB9AsAv;8JiO&OYB@Rm zGb!wE*6DXFW?}#`HiD+J7mf~@Y?Q>aWk+50di(YvZWu{GkG{u}X*3P}y>E)nnQVQv z%wNVqP5$%;q5nans9dDRmwa-*hjpT=W=OQSRn>BV=|w+Suy@Nn5~l*C{*T~h%|`-Gb) zH744;PfDwK?0yx%Z#HA4y9&mQwq0vUl{=>T8kgrbLnDX=ZfC4~G)i=IRSs&EpNO6C zK@@YPOAL%`OWhPky_#4XrT__DYu@pWw;r%bK->)c^moL@vQ2;z1KJ9kUTMhr^Sjrb zxh9!!Kfa68x9w7866T70af|?>n{)e_b760Xs%yCc)`&UD0l#p*F>_@Y_N7ykx{u)- z-Ha6nA13a5qfxVB<6EaT-01licV=Y-$5{Rj^dp3&MW0l0>R8N5lT^S+HbLL|RvZy| z@9dydapg5YCLIG_1AzoUGYeSX$a_Zz=;J<0q#1*>pU1YipS%a2!v%}IYXdekg$sQx zn3<4Z>&^8ak*^FtR-xRc{!TOFgcm`7lB|Dn0*T%&9D3^FCi<}L<>h|pfvF2EX%GuI zF)gTQNhZfKL!|l>?m`~ZytT}(-6|`>!eDyX^bQ>GnzI@yEcol}_33uap+mz&2wQJ? 
znTXpx*TYN)@6zzv%=vAd`9<)jl4X>?)Y*ntw&Jg+6}Vp>F_U9sOHK3VeS@%NA0^xh z^oM=_$gcc@zLDLg%3$q3Et}su04l57C+B(xy7&EOoLKOHWe2|-TU@u&#HEY(I!uLM zCk2n9x-89+w(qsUA;D4slX%mRa8LEP(9Ap%B^&4tHu(Yn4>o?&lcL`Z>L#!DW>aSn z*+NaxbR-m>8Rd^wGcpqHiL9lbFW-P8x>;984}(5uFB z#s59wq+wWUtc)XHfHJ|be}c&_J~yZq;WUCw>uL$(VJ~U%~q?VU51&re%`l) z>Zl0OGD*_H?4O3m@NVYF(OV%S zn(kxE^xfN+6Bi!)-*gchX_2ZH!^CWRLM%O3jfy}sZwI5i2%Pma+3I5J{pRkkQY{BK zRgoYCagh*0H};l)J!AXvJ(6Vy@#TC_y2Sj*wkILd(tv2mJP^*Gj;`s`7!!{tpB^de z*2Be@10KpcYdRnCsG^O!rH5;o$Oqhc&I&|Ys<%K!a@Ioqb-sD2bgdo`ljR7{>QCh5 zyDpj84)IzZ$fRM2xsmE*aae^OsRf=V&)N8DLAjJf+a%BH8Kkpz%1>&MOWWkH5FL;Y zQl8?i|Jg&?Sw*R1|Mf!P?W?!zGq+bjm9>myLbKu-HPLLzjRGg)8LyBN*NFk)pj9|? zOwf}jic(3ATTV)TCDo%^X>g&Feh+Gyv|lCY(qCQWr|K{-Xk3C4x7Xj}~vvX!=W@SkqA ziPmTR0K>zmozBE}4wb^4O!tfuV6QV*R*kJoX`j?uh&~8&8&fwI$+ejFCV)EZ;F}t< zDzve+m^2QKbcx0e%6;Or(e67bncf@G9NTb6f1455)`_xs_5R^eV4^LvfvtnwoT zMlvS@YAbLwcfj?uq`J3LgWENPUcE7%P%kuT^UwGZ{IrAd(2730-!=Rj1KD7(E%N#M z_nIBPYt6eR7vSzOqI;}K_DG>6l7&-S#)ZH{FmFQZa@)#*iA1g&`$ZsVsYhlj;kADT z5>YSj%D#=MoSDQ>EhwMue_GNSVrajV9=c5WiSc>a>Ng_GXhL@ne{;e+1VLjae32{Z$=2Xm^@VDiu_&9zj%jXTj~}V(TBVH8GBCYqyc*>GPlM z(u1JQe2iUo$H%1*pS}6jH$mj@TA*C(*bh`E5gCQR73_jk>Onwo5T1Vsb*6c8(H{Ga zbv22d#HP&RBMUtR?G|-%3mZ72fMDx*TyP7}dTE{{8Qs$EcWC~=U!=U%5pcJ>AlEHZ zSEDU*^0E=(_;sr2AdY(vg)DG)C=y{j+Mwc18?rRNnK})4vOc^ozlN2^8?Ehn*awd= zz4wmN0xJdS8sZ1a?Z<{jH%6fo&eBW7YF~a{A}(5NJnLPy1=79v6Q9}~lh+ko?Bf#> zKpS_Jj6Pkao%L?pTV?M&S1}P?IN$$Sd{pz=iSYI>2RA7W;dZ44_(tYK96mOs&W5lI`)at~NU(NwCiO$_uD zHj7G#QV;q)aw3+Q06rI>ozuWu;-DZrKNnYO;Bhv9Gh_B$t6 zW4|<-S;Ssn;_6;rPj?-PC(qQ0hGd|QW3w$M`{I!Z&N$L?>@s04)bh9!?OJ=od@Py7 z%;jU3PA=MCsvY&rN}1kvV%U-;n%}J>Xz=lxn@9HK=y=m?=KzzoBH==}KyV|8%*Uh7)sXjCCS$NHFEjuwb z(R&&8oKEmFZ-W{WY339@pDy-X{<_@rxQCPT*fTra!?#_Y)jdAMm9k%+E*T&%Pkjh% zYi;X?D1#jVmn&Z{cI)g`FqOdbcpe zLp-4z0_yS-ViMBg|L!FR-vEufue$xN%v?wprf2@pnO$DSm1fD>jaXfPLhH?;YC@IL{><;mCKtLfJ z-cUHg(an`Z1|SKLP9EdcxGpIplg!KqL`q4DbBJAi9IhN>zkdiJla~1}nFLVozhsg? 
zssEBmN&L5-j12I9>j9H=fOZ}Nx9@DJ|#<485vDkpo+MHc<<&6IB>NvK&&Lk{ delta 6042 zcmaiYXH-*N)3zvrfFLM>O7BE4)BvFh(j%ZK0i^d{LJ6Iy2#QDz1R)fWrqV($p(#y* z^eR<42noHnH*nwUS?_wEwZ8BCIBW0OGiUamea$tq&Z5qlHiq-^6$xPx8!TBtAWr)64Xy~k{o(Y}t*QFzy0hhscOKaJ z28FuZ1N|vEt5bshS>dSP(ZRG&LxW2tLP0)$ta5Yh=&faN@X zj2aW%3u;KH8$W_@9g`kU?=G&8R2+fP%Kbh^MvG!MFmiq`;ge1c3DSguNme$0Y$fMs zUqB@r>tSu>EcAWg5pqe>mOqq^s@pijZ8H*vv;=PTT6 zPpeXPKlUkQ+b!sl)FJBxq=J8*qaO>kiKKYMs|T{PS1`EIAm~0h^3Q8AtiE z$r#JCIHdA*N1x?P)IR<*R?vwEkSl!JKz~%1S_vPIkk!0BQ44$haotpYR5G12C&uDF zAB;XWS{lHKL20Mj1>g1kU1dJ?1ERUX{Vg1dD>s}p)4im}t%h4Jl zWe_zVwWGQ{ro=dkzv|F~`cQ9NG6gfK)1W-13U=>i)i)CyR0|XR<9x$Jr>v*IJja-F zR^;a;%>*7PG>9h;kwj|uS~_mWpV`9YhtR7K^MX+|lj7g^$XsOzi{^R25w(e20ouq} z6wz&o9uuA2@3#D;iqQ&ND?kCo8o5!@_N2UAeyRPGOB^j#`{=IDBQb(e17?CjU`(Q2 zfqDwV1vK{@KNMsm&}dtB+_=}bd+)f0 zduIrC$KA4NS3(6*ZgVhm9ns%%y!G!E-yFUv$%3_i4o9}LDI=DYJ`4T>Nh%vZXgWWH z0oBL9l)1DOXRPT3_|jIOt=Y&iFa10FIderNy9`=2^XJ54_4um-KRk`2ygN%)>ElZC zpDAb_j+CjyMVyCCg>_!)8u>88ZB(b(?b_?gj4B*sukz&5!l%@tG*&I*O9n$GxAD^- zun@#FM?=@8Y767oNDydLkIEFTg6Q^wG-nWpL3zF3D8h?~I0!ScG}*^RkAxR3WNcj< z7ny&<5ICU|Y?4LZa|ZGm3}CFJ(x3IPZFJAU1C2cd0ijVxYIo z+T-*bJ;#IZl>6nKTMTBB{9|E!(+rP>1FEbTRi%FlUM$_EkYcPIY}RkeVQ}A6SL1di zNLx9Eyv?mO{k0FwL2;r%J{qmu@z z6jO+Q5)|B0b1HAvto-!(oktaFnci}~N&8LsU6TRRa$0usE=fi+h0wE>LH)7Exq3Cp zu3|Mdd#(YiIV~!n5#DieIyyrd)__$XPM?nI@@^kw^vo>Zvtq<&QbM} zCFo>r(}?N)r*#3NZJ2e##7Zk$UQ-{UsN_^Vc%{%e_+aq0aq#48X$M)iol(K#G%Ta0 z0yH9f%;BhG9{RaKUQ_@w?3 zad}(<;tl`E^9I9~7Jr%QtC+%dE$pzrMfS6!!3vF2hy5@X7zuioj8S8yrYs0?Q^B^g zEE$>gxzn?)-_roSDJGiow+<38*rF=P<20r~tu_2e(HuLNGD0*KT*t#(=SO&4Wg#HT zAD07vB^SOG!INzXZ>KPHc(32deI(q~Q9pH5i;1}x&64r8CgqA#1bFm+F`+L5`1`NO zxs8t?)WKhv-9Dlte!_|CNW1-*g6S9hH?|*)iJ@7L&k(WXg-Xy&oT*#MI5})<$;@1q zXp6Lq9iNlc2#hwg;v4C685`IwGwf^a`Wx%2YD9g6nMgL!9FkB7@~=>64J{bD!N^mb zzQhD?zSwU>Ph3<2a@T6;c?ShedQ0DrLO-!DL)=>kpXX@4GD-BKW~CZiY7m3l^nl z$tDRv++tn*&`}H-7nnZiN~h2nD(w=DiOMnyA_G*%Q$)l0#4S0>)n&eZO(VNx^owH* z7z^){TzyA%;UcZv`Uf%xq?QBU=6Ns?+5yJ9@jCj>Dx6+^{I2GSqRzk?ScG^kEy1ANhl*vr6 
zl5gso+as}*9}j)8EdA=fybs=x%Up92+#a6YSJdj?BYCw$Vll?cgUd{Xf#KmtJ3Hnc z(0rPi>d zXzUzCM5=zAok?HZ+;>8Xs$dpBZ)lDbpXzI0R`9F2v>=|3i?Eif&QV7iar#zu#_jQ4 zew*5Rd!#Ui1%D2=>J!ZwddRYWhqBg&BuC>YHr@V+I5`h<8xSUxvdA~6?6)Y6WTU=~ zXz7urgH3R%%OZxV~MJ=75AQVkqkz>H^zypVf*hC1O3L578YPa zVi?cAK?zcOI+NiUZliPn3^6_nwYo75t_nf*(u2P`eLQnH6Gc0F3F2ge*&%vMJ&$Ds zE@9W;QRbbN(Vk?1ORz_jZ58LYR0!ZwgW0oIH|*s(q|W9Fd!S7O~LW5_}=HSIZ(| z%Ji8`(TDi~?KlQ2A?Dy0+6~6IuPpeq8agU~X@6L2`t1?8^ue?_y_@4Q(?9n-Jdk0? zX3KGi8SU~6UGjW3F-Fy!-gvgDwEf$8^O_4Y^d0KN=Fe`bX>37BDgN{jUbSk&rco2Y z@BxvR4>wUq*Ok#4qT&Kil}+|Cz3r<8Nz&R2@&RrPt~)R6;dxifP!j~r-_y`0`<-Xr z_WOe?k^!4}%rcv^(4x%THgD`v0?Th4A#ikNel{AFG3_l8Pkc=s>k8M=$g~@q+E}<% zBe9Y5J7t|!I}!02^K_U1W`p{Hl-4T-qlBIR#zGN=*lS^ za~O8tCCC}+d`XE3NHL#A!<&haV|vbAu!7qK9Df8&$O|I-Wv@G-ZcTFU`S_kv9yzEz zBr?L6fG4vuQpH`0&RFG%ka~*?bRP)*lyE+fvyIeBs__e+mNf5QZMpgzasMlSVR>?$ zh?AsXR^+0SqxE6r0}NUVUV)E8j0R!Dh)EkcmY#h`ef>$B)zIo;O|5_GaA6k8C|?%& zE8XmB#)MlqEyQ76)6u>LnA#{Yv6P8#9S{p_iaG}vk(8Bl#QP?#hDOS>?mKiZtjXGh zk{jtw@_h05)1qZqieL+V#z5@xDiEXMxhA6I&u?7af^YOT)7DtMc)&a3@zeKuonghR z0yd=OUy>XebbTLcSNKa8=@vI2UYvBrb5XXw7m}*7lUqO$HV|Vy1CgEC0dGqKm*&RzNkBOqA6?J zNz>N#nnS9njiOh|rnF`t4oL5$K_1B2s-U9aIv*bun2fAqNuTVq3sSVHns-KY_1+%N zJuTZh41uhhbw?eS?Agi1VqND8gyjK^rhePeIeYawrS9&i(G+?4(R(Ohbc>)GMSSYhM# z3eq^ASQzUn4Wr(;3Q|n$vu7|tl!l}DU_xGt4>7#kmB+$nY1iVjF<`fd*EV% z#3PG&X6t*;sJT{r;fg(Glk*-PN@1=iI`SImh??M5E7t>864>6p1(s1PNx*{k{pB z=+14F;Q3;YHF}Dw4dY|;mvYMmY z-dBw62(D{k+f;~W>71|KP4KL)Fakyx7iy(0Jq4hTb2vP~Be8WPkoHk+XS+QET&Rom z4v`qj@BGU~W_tFjCyb#Bh#Dh~{UoK*sSi+1$EW;hhlHbqhPNAm=6Y{uGKm9+LG*D)i9JCr8?(fhb1!1Z5iLNX<9Io`4QaC=Gh+y3dO>a+Z zIRgjn0T$iXWP>1?ZgPc_Ox1hxN6kVfPlDSvbpAHzNFf5w^iB}j@3{7+>bIvB)AqC3 z4(A%Ww>#{DXLwEmf6ZQL06aZbGpY|54K7C&HKe;uHSBWpm5a{4)h=A{5qi-0x6fwB z4c_~H%`Y?|T>2O7aL-Qq2j|CvCtaB|gq^+8N2@y$C)H6kKw6i*XN@r5-)|aA>gd5b zI*gYOEYoxAwi{v)-jM!{HL+KyqQ;-Xe{L@(q7|#gXlGX~lgWuaqm7TqZyO=OVeXKR ztEWJzRbcRjpY@c%l3S8`GR@MJdJA3$Q{z^2PFye}EFQ6LjB_hyWBb?=_`Q0rE%A;M zqY(M+5K@_DYqz(0w;YJylAEByMwkTy+pBS=(slkoy*h-_@T7R+x1rN+$0 
zys4#$m#j=PQGnz@67`BMby8gy={`$5Hd0$$CN)sajW5qM9!h-(4W9@dh&t?Gn^eGm z(cf36K7$}O`LO9h)d>4Rv%0{&l|&tUbWV-;sGHvEE?W6*0>HYRjofuBuo(DlYdyX7 zj{l%p(-}S*aom5SyE|7$IbhxHSDAG5lw@CnQ&q##O=)%CX3{o!=exA^Om7UJL=R0B;fd{`+!LiTRh@N7iQEOt=ja6EMPq zM0~LD6P*#h_Irhif3*6*KP{;117NL(FVFX(we(tR-}Jh!ozK&QJK}xoHOIx%L4^5J zyC1&G>zpz9%P-1o7*|ZDZ$A2?%W68hVZvlSiqdBv z7qLxR`D+|r%{*Gou1@?ws5Wwm4Oy9KYu&AuOiHJi3AKt4jdwR$tP0?ptdmAt)UJAA$ zv$OHG;pDj7S*Efm_;`v9aX2>I&`@`@+ks8ly^^`GSwz4$G@y_;d|tW&mHU3nsob%G z{edMGdB`ZnBP;?EmJ$&bmIP1=X}MV|TX@;9^D9e>3X6h7{LB6`PT1s-y zhKfWAX~)AS_@Ctm!XQ3jVZQ(87ixG6`K8$TpW7I)KX!oIu%9~susgY1c-ec}Sg^xw zJUkuTT-han;y@9k46n)+Q3+9G2rn2RCMwA;eEzVzvQu6Rge63Yv;Qj-6Bqq2nYftL zf6FBQTMr}w`oDT$F)8tL8SVde3>Fs?M?(1IO~I1Fs$!2s#Z{D5#8s5QqEZqdkeG@j tSXfE%k+O=Sh&;RO|5wF?U0$Bj)62rc%h$ukmJ%!h22t|xsOYFt{trwXv5Wu! diff --git a/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf b/Designing-Simulations-in-R_files/figure-latex/swan_example_setup-1.pdf index 0d4ea993c6249c3452d35cbb28c07ad25933599d..dd3f2f9623af0e04ee9cae2c32be3b4b152712cc 100644 GIT binary patch delta 14094 zcmaib2|SeB|34{}P)dbnsN60}wy_Ncg{DL*iR`j(jeQx*ldDj~m5?QpE$b+rT=rcd$H^OXFq!WhTS7mpwqqpI*LaDysoNGb% zgJ-q(2T3-pqcw(tbL$klShk?*Dh@J9Sd`uW$zk&9Jc}k(n0?i3es?C%S#Tq)mudOb zZv-Jo2p@?#dmoMEZ4Ke(9rLqpf;2KH*!< zpsHOh>HCZB_tBg+BX&@NpN6O6v`vKFnjso%wY?DB%0W{3NR;+CoTVAfxybgl%J)`yc};HsRskO0E` z0G6#H*vDbmOuR)Hp-i@wIbe=uJONjssSNh64)z+TE3+{=jku?YU{%BhZKBXvwh);E zEKsH0J<~DpF5wK^$gNyup2Ls~qXJ#3_3*gYSk z7%@1mVWAo`~(m07>atqlZ$J{RN$KHbc>6NaCBD%v9!aQSv)i36XSZH&!rN6g?i_cXQ9 zY?%;(K|Q&#GVOwgry|@@oDJynZ{5EV=SaR_9|QLi8V~m=#GDD5fTuVc$I;?wpz{3x zq0+ukXe`6Vm>X6@{}{FiXy38Vw6PT^+=3NC|3#x7(3tdpY4jlIE~kxB^m+`Wrk6I) ztQ=Rx+2UHd!xxAO#I3Coz0X>RFRSm9YB<8QhD==2!?6|{251ou-xeG5p2Qt@Y|?_g zPylyMeOjej&yFs?hfj4<#h_s6M4TV7x+*?G1FiY&85Z@7K?QmKcM7k`FVDUS?>PI}Qi-Ygx)sy7NRr?9x!ePbY^smO zFNXNQCuQVm$Q;vr#X%G42!AwYc{Q_sIr7SYp_x?_ zckGn`cIeZcpyP#ojB9eq>)vY7ZUuA~uU{o0XM(0j&O7j*>6tni?2@^{{-EiJ!#w>s4-JTUK<-83Eoh&Q;?`=n9eIbYYryg+vp76@mC-_y(tN3)Sh 
zh-XvA$&(S2im_v@!0W(z2F~&~zu}r>1IvEs$FeU>rC|}CDtjGokd^%YwR$#Vv8RJa z<(8|;M0qB@sX)q&qC@azEWWoN=_1$c64LA~7$xYQM0y9>uSL=j z8Lcf9yD&4s=+9FE`y++mi$BfzrFYExhv;#1o+)F!M4eY0xUwi_9n)*>bzb-Oqg`}* zDiJU7RJja~)v=XnakCcO#S(0~4396#8O$ne%Bd&uJ^Z_02A==1aY(4{S5q&PetkiC z0=Rt3i%j7}P)|{{5ZvBqlXEaWPaP6OdjfL2rHAWYO4Y%+`@bYFWY`w0ZDs0xX4M-b zU3APad?$kBm}-qJs=03wb$DpGGIs6FyXZYNBkz-9M8j0e+38bN{=DaY zfn#+gW7Ob5?8#`Bi}OD52X6#0x>h#Ra9!~(9x(Gd7BAB)z}7$fEhzcnI7Ft(sa2JQ za!M_!3Ws=37xsxzHOw&|X_#Sq*y}>)*Xzt6YbmwQ@9uZQj+N3}?2GCbnNud`9aBIA zjHoq6&hx1P1B5A=n!qo0OnM-_79i2$jOI9CJ@;b%c;D-*t`60w0!zdMBL{g#5-B>W z?3lUe=9a|=tj}VAf*7FSx)fxZlT)m9C7v^8&zm6~n$Jg?Sc#(^F`Ki@M>eO~9y)zj z{X{YT=IA|rKG$HRBGSZGM>#t`-9)1xHR+!AomSq=6my^J7K~w>zCHrkYo#XzdF!vz zqMjo~?3dVP&#}5+qt#o}DGT=+nCjo0+MIev2^bU}eJz(rK9CprSgWA|dSg)i#>C)O zc9xm+Jq?E}jkY6sOxm@ie$C$*YpnOO4T|nzu&sRP1^#^U4P@oI*1TLPv4jD4W*Ar)N9F5<$22LgVDJ&aZfM+$xS+UpwXu9=MZncM zxI^$SxW=1gS9~hoC-_2PW2-`?Zd{Bc{PBNo3ht4I55xxgsK^Z+jY>H+NEhX0>(x+h z=h0A>Uu>eUtz)01t&I}M%wS~+|8k=ZE1cj@&C+tjhn`*zDRPuJq1N9}Hk)*&*!%#F zkQ$<&oqY2T!)!%yrz~+%0V_5ZQ6#S@1AP=NcED#aj7H9uR31^7QrPl=`tN~tCvK~$Y|4A{r^Sx9N3T>`s_awfGC_8T zGUvLb9vLl5_eLzfxK$tU^{m}&p~O@4I8VaMh#1AE5rR{vQ9ftE1?|VQGouW@Twhyy7Xa8xpafvvuew-9CfcL zs9GB8F59XpO_giPuoqa)Q4%!yz*u9lm&*|KM!DvSJo4EyKVh~grs!u4exj$Mm=J-W zM1W_weD91H^%zw}PWl53%NP~T%tS)p*IX{5(vk<@wJR&loatROXPA&mib9oLphg~s z_nAPAU7$<-x-$L6vtbLovy>_u=!RS%s8!>N1tR;Z9Nn;C8WMU?lw$sX>7CV>5t+Ep z`~6h!?fu56py(G3Qn}Z@I^xRTek;styY)4oKi%p|&b_aWP({|aLYa1huS4Hm){Hb7 z(JzxpktifdWxb-1<49}CP0+Q6QV5^tz-!2GFJXBv_oA`-)B`L%25d!cP{XcKN+IBdo%Zed9X@!4(F zhXo(ykCF?2=kAQ~V2<+X4WE}Z8kacUOec_?muvu zv01**&{RKwTL9_3&w;@nfMC2AGrMJ_UBl?TANU*0+W3#=H3Sr<>53$(D^GwQxcvou zD>tB~%GUORR^;&o%w*udVB1ev2_WF&TggHp<`8OHW(W=t@zdk~goZ#ds0E|G3B|y| zG&p|zmrx@7zqGVCL~`smookh1zeDBrbvf(syRZC5dH3gNKX-6=shII#eCh6_r2X&E zmd5oDL*qA7Yf)q0+e4+&GXAB^Z2RS zVQ33pUH89$t%RrbRj_YbiT#Ga_X_)sNwt{vyM)M=w4_Dg8WxK1t@N>EPIVAVtI*Oq z6WN+zO5ibrws9~3h(OuGbN>n*QpqG0qR4a_nof6;dP@b*)lX<1idOLj4ca5@H&&dk 
[GIT binary patch data omitted]
diff --git a/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf b/Designing-Simulations-in-R_files/figure-latex/ttest_result_figure-1.pdf
index 76e860d5a450023bcf93d549b40b16d2c41fd94e..316bcaac6a1e1a54e31ae2c3356616a2c985d266 100644
GIT binary patch
delta 2064
[binary data omitted]
delta 1229
[binary data omitted]
 (See the vignette \"AER\" for a package overview.)", - "LazyLoad": "yes", - "Depends": [ - "R (>= 3.0.0)", - "car (>= 2.0-19)", - "lmtest", - "sandwich (>= 2.4-0)", - "survival (>= 2.37-5)", - "zoo" - ], - "Suggests": [ - "boot", - "dynlm", - "effects", - "fGarch", - "forecast", - "foreign", - "ineq", - "KernSmooth", - "lattice", - "longmemo", - "MASS", - "mlogit", - "nlme", - "nnet", -
"np", - "plm", - "pscl", - "quantreg", - "rgl", - "ROCR", - "rugarch", - "sampleSelection", - "scatterplot3d", - "strucchange", - "systemfit (>= 1.1-20)", - "truncreg", - "tseries", - "urca", - "vars" - ], - "Imports": [ - "stats", - "Formula (>= 0.2-0)" - ], - "License": "GPL-2 | GPL-3", - "NeedsCompilation": "no", - "Author": "Christian Kleiber [aut] (ORCID: ), Achim Zeileis [aut, cre] (ORCID: )", - "Maintainer": "Achim Zeileis ", - "Repository": "CRAN", - "Encoding": "UTF-8" - }, "CompQuadForm": { "Package": "CompQuadForm", "Version": "1.4.4", @@ -81,7 +22,7 @@ "License": "GPL (>= 2)", "LazyLoad": "yes", "NeedsCompilation": "yes", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "DBI": { @@ -133,30 +74,6 @@ "Maintainer": "Kirill Müller ", "Repository": "CRAN" }, - "Deriv": { - "Package": "Deriv", - "Version": "4.2.0", - "Source": "Repository", - "Type": "Package", - "Title": "Symbolic Differentiation", - "Date": "2025-06-20", - "Authors@R": "c(person(given=\"Andrew\", family=\"Clausen\", role=\"aut\"), person(given=\"Serguei\", family=\"Sokol\", role=c(\"aut\", \"cre\"), email=\"sokol@insa-toulouse.fr\", comment = c(ORCID = \"0000-0002-5674-3327\")), person(given=\"Andreas\", family=\"Rappold\", role=\"ctb\", email=\"arappold@gmx.at\"))", - "Description": "R-based solution for symbolic differentiation. It admits user-defined function as well as function substitution in arguments of functions to be differentiated. 
Some symbolic simplification is part of the work.", - "License": "GPL (>= 3)", - "Suggests": [ - "testthat (>= 0.11.0)" - ], - "BugReports": "https://github.com/sgsokol/Deriv/issues", - "RoxygenNote": "7.3.1", - "Imports": [ - "methods" - ], - "Encoding": "UTF-8", - "NeedsCompilation": "no", - "Author": "Andrew Clausen [aut], Serguei Sokol [aut, cre] (ORCID: ), Andreas Rappold [ctb]", - "Maintainer": "Serguei Sokol ", - "Repository": "CRAN" - }, "Formula": { "Package": "Formula", "Version": "1.2-5", @@ -195,7 +112,7 @@ "NeedsCompilation": "no", "Author": "Coen Bernaards [aut, cre], Paul Gilbert [aut], Robert Jennrich [aut]", "Maintainer": "Coen Bernaards ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "HLMdiag": { @@ -258,7 +175,7 @@ "VignetteBuilder": "knitr", "NeedsCompilation": "yes", "Author": "Adam Loy [cre, aut], Jaylin Lowe [aut], Jack Moran [aut]", - "Repository": "CRAN" + "Repository": "RSPM" }, "MASS": { "Package": "MASS", @@ -298,10 +215,10 @@ }, "Matrix": { "Package": "Matrix", - "Version": "1.7-3", + "Version": "1.7-4", "Source": "Repository", "VersionNote": "do also bump src/version.h, inst/include/Matrix/version.h", - "Date": "2025-03-05", + "Date": "2025-08-27", "Priority": "recommended", "Title": "Sparse and Dense Matrix Classes and Methods", "Description": "A rich hierarchy of sparse and dense matrix classes, including general, symmetric, triangular, and diagonal matrices with numeric, logical, or pattern entries. Efficient methods for operating on such matrices, often wrapping the 'BLAS', 'LAPACK', and 'SuiteSparse' libraries.", @@ -337,7 +254,7 @@ "BuildResaveData": "no", "Encoding": "UTF-8", "NeedsCompilation": "yes", - "Author": "Douglas Bates [aut] (), Martin Maechler [aut, cre] (), Mikael Jagan [aut] (), Timothy A. 
Davis [ctb] (, SuiteSparse libraries, collaborators listed in dir(system.file(\"doc\", \"SuiteSparse\", package=\"Matrix\"), pattern=\"License\", full.names=TRUE, recursive=TRUE)), George Karypis [ctb] (, METIS library, Copyright: Regents of the University of Minnesota), Jason Riedy [ctb] (, GNU Octave's condest() and onenormest(), Copyright: Regents of the University of California), Jens Oehlschlägel [ctb] (initial nearPD()), R Core Team [ctb] (02zz1nj61, base R's matrix implementation)", + "Author": "Douglas Bates [aut] (ORCID: ), Martin Maechler [aut, cre] (ORCID: ), Mikael Jagan [aut] (ORCID: ), Timothy A. Davis [ctb] (ORCID: , SuiteSparse libraries, collaborators listed in dir(system.file(\"doc\", \"SuiteSparse\", package=\"Matrix\"), pattern=\"License\", full.names=TRUE, recursive=TRUE)), George Karypis [ctb] (ORCID: , METIS library, Copyright: Regents of the University of Minnesota), Jason Riedy [ctb] (ORCID: , GNU Octave's condest() and onenormest(), Copyright: Regents of the University of California), Jens Oehlschlägel [ctb] (initial nearPD()), R Core Team [ctb] (ROR: , base R's matrix implementation)", "Maintainer": "Martin Maechler ", "Repository": "CRAN" }, @@ -369,7 +286,7 @@ "NeedsCompilation": "no", "Author": "Douglas Bates [aut] (), Martin Maechler [aut, cre] ()", "Maintainer": "Martin Maechler ", - "Repository": "CRAN" + "Repository": "RSPM" }, "R6": { "Package": "R6", @@ -446,13 +363,13 @@ }, "RcppArmadillo": { "Package": "RcppArmadillo", - "Version": "14.6.0-1", + "Version": "15.0.2-2", "Source": "Repository", "Type": "Package", "Title": "'Rcpp' Integration for the 'Armadillo' Templated Linear Algebra Library", - "Date": "2025-07-02", + "Date": "2025-09-18", "Authors@R": "c(person(\"Dirk\", \"Eddelbuettel\", role = c(\"aut\", \"cre\"), email = \"edd@debian.org\", comment = c(ORCID = \"0000-0001-6419-907X\")), person(\"Romain\", \"Francois\", role = \"aut\", comment = c(ORCID = \"0000-0002-2444-4226\")), person(\"Doug\", \"Bates\", role = \"aut\", 
comment = c(ORCID = \"0000-0001-8316-9503\")), person(\"Binxiang\", \"Ni\", role = \"aut\"), person(\"Conrad\", \"Sanderson\", role = \"aut\", comment = c(ORCID = \"0000-0002-0049-4501\")))", - "Description": "'Armadillo' is a templated C++ linear algebra library (by Conrad Sanderson) that aims towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions. Various matrix decompositions are provided through optional integration with LAPACK and ATLAS libraries. The 'RcppArmadillo' package includes the header files from the templated 'Armadillo' library. Thus users do not need to install 'Armadillo' itself in order to use 'RcppArmadillo'. From release 7.800.0 on, 'Armadillo' is licensed under Apache License 2; previous releases were under licensed as MPL 2.0 from version 3.800.0 onwards and LGPL-3 prior to that; 'RcppArmadillo' (the 'Rcpp' bindings/bridge to Armadillo) is licensed under the GNU GPL version 2 or later, as is the rest of 'Rcpp'.", + "Description": "'Armadillo' is a templated C++ linear algebra library aiming towards a good balance between speed and ease of use. It provides high-level syntax and functionality deliberately similar to Matlab. It is useful for algorithm development directly in C++, or quick conversion of research code into production environments. It provides efficient classes for vectors, matrices and cubes where dense and sparse matrices are supported. Integer, floating point and complex numbers are supported. A sophisticated expression evaluator (based on template meta-programming) automatically combines several operations to increase speed and efficiency. Dynamic evaluation automatically chooses optimal code paths based on detected matrix structures. Matrix decompositions are provided through integration with LAPACK, or one of its high performance drop-in replacements (such as 'MKL' or 'OpenBLAS'). 
It can automatically use 'OpenMP' multi-threading (parallelisation) to speed up computationally expensive operations. . The 'RcppArmadillo' package includes the header files from the 'Armadillo' library; users do not need to install 'Armadillo' itself in order to use 'RcppArmadillo'. Starting from release 15.0.0, the minimum compilation standard is C++14 so 'Armadillo' version 14.6.3 is included as a fallback when an R package forces the C++11 standard. Package authors should set a '#define' to select the 'current' version, or select the 'legacy' version (also chosen as default) if they must. See 'GitHub issue #475' for details. . Since release 7.800.0, 'Armadillo' is licensed under Apache License 2; previous releases were under licensed as MPL 2.0 from version 3.800.0 onwards and LGPL-3 prior to that; 'RcppArmadillo' (the 'Rcpp' bindings/bridge to Armadillo) is licensed under the GNU GPL version 2 or later, as is the rest of 'Rcpp'.", "License": "GPL (>= 2)", "LazyLoad": "yes", "Depends": [ @@ -480,7 +397,7 @@ "NeedsCompilation": "yes", "Author": "Dirk Eddelbuettel [aut, cre] (ORCID: ), Romain Francois [aut] (ORCID: ), Doug Bates [aut] (ORCID: ), Binxiang Ni [aut], Conrad Sanderson [aut] (ORCID: )", "Maintainer": "Dirk Eddelbuettel ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "RcppEigen": { @@ -556,6 +473,45 @@ "Repository": "CRAN", "Encoding": "UTF-8" }, + "S7": { + "Package": "S7", + "Version": "0.2.0", + "Source": "Repository", + "Title": "An Object Oriented System Meant to Become a Successor to S3 and S4", + "Authors@R": "c( person(\"Object-Oriented Programming Working Group\", role = \"cph\"), person(\"Davis\", \"Vaughan\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Tomasz\", \"Kalinowski\", role = \"aut\"), person(\"Will\", \"Landau\", role = \"aut\"), person(\"Michael\", \"Lawrence\", role = \"aut\"), person(\"Martin\", \"Maechler\", role = \"aut\", 
comment = c(ORCID = \"0000-0002-8685-9910\")), person(\"Luke\", \"Tierney\", role = \"aut\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-4757-117X\")) )", + "Description": "A new object oriented programming system designed to be a successor to S3 and S4. It includes formal class, generic, and method specification, and a limited form of multiple dispatch. It has been designed and implemented collaboratively by the R Consortium Object-Oriented Programming Working Group, which includes representatives from R-Core, 'Bioconductor', 'Posit'/'tidyverse', and the wider R community.", + "License": "MIT + file LICENSE", + "URL": "https://rconsortium.github.io/S7/, https://github.com/RConsortium/S7", + "BugReports": "https://github.com/RConsortium/S7/issues", + "Depends": [ + "R (>= 3.5.0)" + ], + "Imports": [ + "utils" + ], + "Suggests": [ + "bench", + "callr", + "covr", + "knitr", + "methods", + "rmarkdown", + "testthat (>= 3.2.0)", + "tibble" + ], + "VignetteBuilder": "knitr", + "Config/build/compilation-database": "true", + "Config/Needs/website": "sloop", + "Config/testthat/edition": "3", + "Config/testthat/parallel": "TRUE", + "Config/testthat/start-first": "external-generic", + "Encoding": "UTF-8", + "RoxygenNote": "7.3.2", + "NeedsCompilation": "yes", + "Author": "Object-Oriented Programming Working Group [cph], Davis Vaughan [aut], Jim Hester [aut] (), Tomasz Kalinowski [aut], Will Landau [aut], Michael Lawrence [aut], Martin Maechler [aut] (), Luke Tierney [aut], Hadley Wickham [aut, cre] ()", + "Maintainer": "Hadley Wickham ", + "Repository": "CRAN" + }, "SparseM": { "Package": "SparseM", "Version": "1.84-2", @@ -581,7 +537,7 @@ "URL": "http://www.econ.uiuc.edu/~roger/research/sparse/sparse.html", "NeedsCompilation": "yes", "Author": "Roger Koenker [cre, aut], Pin Tian Ng [ctb] (Contributions to Sparse QR code), Yousef Saad [ctb] (author of sparskit2), Ben Shaby [ctb] (author of chol2csr), 
Martin Maechler [ctb] (chol() tweaks; S4, )", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "abind": { @@ -622,7 +578,7 @@ "Maintainer": "Ravi Varadhan ", "License": "GPL (>= 2)", "LazyLoad": "yes", - "Repository": "CRAN", + "Repository": "RSPM", "NeedsCompilation": "no", "Encoding": "UTF-8" }, @@ -649,7 +605,7 @@ "MASS", "igraph" ], - "Repository": "CRAN", + "Repository": "RSPM", "RoxygenNote": "6.0.1" }, "arm": { @@ -906,16 +862,16 @@ ], "VignetteBuilder": "knitr", "Encoding": "UTF-8", - "LazyData": "true", "RoxygenNote": "7.3.2", - "Author": "Luke Miratrix [aut, cre], Nicole Pashley [aut]", - "Maintainer": "Luke Miratrix ", "RemoteType": "github", - "RemoteUsername": "lmiratrix", + "RemoteHost": "api.github.com", "RemoteRepo": "blkvar", + "RemoteUsername": "lmiratrix", "RemoteRef": "HEAD", - "RemoteSha": "d6cec2070a119f8490494f7ecbbe5e007a927bd3", - "RemoteHost": "api.github.com" + "RemoteSha": "60cf10e16a9960a3b0fe0c91adbe3671f604e040", + "NeedsCompilation": "no", + "Author": "Luke Miratrix [aut, cre], Nicole Pashley [aut]", + "Maintainer": "Luke Miratrix " }, "blob": { "Package": "blob", @@ -950,7 +906,7 @@ }, "bookdown": { "Package": "bookdown", - "Version": "0.44", + "Version": "0.45", "Source": "Repository", "Type": "Package", "Title": "Authoring Books and Technical Documents with R Markdown", @@ -1004,15 +960,15 @@ }, "boot": { "Package": "boot", - "Version": "1.3-31", + "Version": "1.3-32", "Source": "Repository", "Priority": "recommended", - "Date": "2024-08-28", - "Authors@R": "c(person(\"Angelo\", \"Canty\", role = \"aut\", email = \"cantya@mcmaster.ca\", comment = \"author of original code for S\"), person(\"Brian\", \"Ripley\", role = c(\"aut\", \"trl\"), email = \"ripley@stats.ox.ac.uk\", comment = \"conversion to R, maintainer 1999--2022, author of parallel support\"), person(\"Alessandra R.\", \"Brazzale\", role = c(\"ctb\", \"cre\"), email = \"brazzale@stat.unipd.it\", comment = \"minor bug fixes\"))", + "Date": 
"2025-08-29", + "Authors@R": "c(person(\"Angelo\", \"Canty\", role = \"aut\", email = \"cantya@mcmaster.ca\", comment = \"author of original code for S\"), person(\"Brian\", \"Ripley\", role = c(\"aut\", \"trl\"), email = \"Brian.Ripley@R-project.org\", comment = \"conversion to R, maintainer 1999--2022, author of parallel support\"), person(\"Alessandra R.\", \"Brazzale\", role = c(\"ctb\", \"cre\"), email = \"brazzale@stat.unipd.it\", comment = \"minor bug fixes\"))", "Maintainer": "Alessandra R. Brazzale ", "Note": "Maintainers are not available to give advice on using a package they did not author.", "Description": "Functions and datasets for bootstrapping from the book \"Bootstrap Methods and Their Application\" by A. C. Davison and D. V. Hinkley (1997, CUP), originally written by Angelo Canty for S.", - "Title": "Bootstrap Functions (Originally by Angelo Canty for S)", + "Title": "Bootstrap Functions", "Depends": [ "R (>= 3.0.0)", "graphics", @@ -1031,7 +987,7 @@ }, "brio": { "Package": "brio", - "Version": "1.1.4", + "Version": "1.1.5", "Source": "Repository", "Title": "Basic R Input Output", "Authors@R": "c( person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Gábor\", \"Csárdi\", , \"csardi.gabor@gmail.com\", role = c(\"aut\", \"cre\")), person(given = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -1053,11 +1009,11 @@ "NeedsCompilation": "yes", "Author": "Jim Hester [aut] (), Gábor Csárdi [aut, cre], Posit Software, PBC [cph, fnd]", "Maintainer": "Gábor Csárdi ", - "Repository": "RSPM" + "Repository": "CRAN" }, "broom": { "Package": "broom", - "Version": "1.0.9", + "Version": "1.0.10", "Source": "Repository", "Type": "Package", "Title": "Convert Statistical Objects into Tidy Tibbles", @@ -1156,6 +1112,7 @@ "spdep (>= 1.1)", "speedglm", "spelling", + "stats4", "survey", "survival (>= 3.6-4)", "systemfit", @@ -1170,7 +1127,7 @@ "Config/usethis/last-upkeep": "2025-04-25", "Encoding": "UTF-8", 
"Language": "en-US", - "RoxygenNote": "7.3.2", + "RoxygenNote": "7.3.3", "Collate": "'aaa-documentation-helper.R' 'null-and-default.R' 'aer.R' 'auc.R' 'base.R' 'bbmle.R' 'betareg.R' 'biglm.R' 'bingroup.R' 'boot.R' 'broom-package.R' 'broom.R' 'btergm.R' 'car.R' 'caret.R' 'cluster.R' 'cmprsk.R' 'data-frame.R' 'deprecated-0-7-0.R' 'drc.R' 'emmeans.R' 'epiR.R' 'ergm.R' 'fixest.R' 'gam.R' 'geepack.R' 'glmnet-cv-glmnet.R' 'glmnet-glmnet.R' 'gmm.R' 'hmisc.R' 'import-standalone-obj-type.R' 'import-standalone-types-check.R' 'joinerml.R' 'kendall.R' 'ks.R' 'lavaan.R' 'leaps.R' 'lfe.R' 'list-irlba.R' 'list-optim.R' 'list-svd.R' 'list-xyz.R' 'list.R' 'lm-beta.R' 'lmodel2.R' 'lmtest.R' 'maps.R' 'margins.R' 'mass-fitdistr.R' 'mass-negbin.R' 'mass-polr.R' 'mass-ridgelm.R' 'stats-lm.R' 'mass-rlm.R' 'mclust.R' 'mediation.R' 'metafor.R' 'mfx.R' 'mgcv.R' 'mlogit.R' 'muhaz.R' 'multcomp.R' 'nnet.R' 'nobs.R' 'ordinal-clm.R' 'ordinal-clmm.R' 'plm.R' 'polca.R' 'psych.R' 'stats-nls.R' 'quantreg-nlrq.R' 'quantreg-rq.R' 'quantreg-rqs.R' 'robust-glmrob.R' 'robust-lmrob.R' 'robustbase-glmrob.R' 'robustbase-lmrob.R' 'sp.R' 'spdep.R' 'speedglm-speedglm.R' 'speedglm-speedlm.R' 'stats-anova.R' 'stats-arima.R' 'stats-decompose.R' 'stats-factanal.R' 'stats-glm.R' 'stats-htest.R' 'stats-kmeans.R' 'stats-loess.R' 'stats-mlm.R' 'stats-prcomp.R' 'stats-smooth.spline.R' 'stats-summary-lm.R' 'stats-time-series.R' 'survey.R' 'survival-aareg.R' 'survival-cch.R' 'survival-coxph.R' 'survival-pyears.R' 'survival-survdiff.R' 'survival-survexp.R' 'survival-survfit.R' 'survival-survreg.R' 'systemfit.R' 'tseries.R' 'utilities.R' 'vars.R' 'zoo.R' 'zzz.R'", "NeedsCompilation": "no", "Author": "David Robinson [aut], Alex Hayes [aut] (ORCID: ), Simon Couch [aut, cre] (ORCID: ), Posit Software, PBC [cph, fnd] (ROR: ), Indrajeet Patil [ctb] (ORCID: ), Derek Chiu [ctb], Matthieu Gomez [ctb], Boris Demeshev [ctb], Dieter Menne [ctb], Benjamin Nutter [ctb], Luke Johnston [ctb], Ben Bolker [ctb], Francois Briatte [ctb], 
Jeffrey Arnold [ctb], Jonah Gabry [ctb], Luciano Selzer [ctb], Gavin Simpson [ctb], Jens Preussner [ctb], Jay Hesselberth [ctb], Hadley Wickham [ctb], Matthew Lincoln [ctb], Alessandro Gasparini [ctb], Lukasz Komsta [ctb], Frederick Novometsky [ctb], Wilson Freitas [ctb], Michelle Evans [ctb], Jason Cory Brunson [ctb], Simon Jackson [ctb], Ben Whalley [ctb], Karissa Whiting [ctb], Yves Rosseel [ctb], Michael Kuehn [ctb], Jorge Cimentada [ctb], Erle Holgersen [ctb], Karl Dunkle Werner [ctb] (ORCID: ), Ethan Christensen [ctb], Steven Pav [ctb], Paul PJ [ctb], Ben Schneider [ctb], Patrick Kennedy [ctb], Lily Medina [ctb], Brian Fannin [ctb], Jason Muhlenkamp [ctb], Matt Lehman [ctb], Bill Denney [ctb] (ORCID: ), Nic Crane [ctb], Andrew Bates [ctb], Vincent Arel-Bundock [ctb] (ORCID: ), Hideaki Hayashi [ctb], Luis Tobalina [ctb], Annie Wang [ctb], Wei Yang Tham [ctb], Clara Wang [ctb], Abby Smith [ctb] (ORCID: ), Jasper Cooper [ctb] (ORCID: ), E Auden Krauska [ctb] (ORCID: ), Alex Wang [ctb], Malcolm Barrett [ctb] (ORCID: ), Charles Gray [ctb] (ORCID: ), Jared Wilber [ctb], Vilmantas Gegzna [ctb] (ORCID: ), Eduard Szoecs [ctb], Frederik Aust [ctb] (ORCID: ), Angus Moore [ctb], Nick Williams [ctb], Marius Barth [ctb] (ORCID: ), Bruna Wundervald [ctb] (ORCID: ), Joyce Cahoon [ctb] (ORCID: ), Grant McDermott [ctb] (ORCID: ), Kevin Zarca [ctb], Shiro Kuriwaki [ctb] (ORCID: ), Lukas Wallrich [ctb] (ORCID: ), James Martherus [ctb] (ORCID: ), Chuliang Xiao [ctb] (ORCID: ), Joseph Larmarange [ctb], Max Kuhn [ctb], Michal Bojanowski [ctb], Hakon Malmedal [ctb], Clara Wang [ctb], Sergio Oller [ctb], Luke Sonnet [ctb], Jim Hester [ctb], Ben Schneider [ctb], Bernie Gray [ctb] (ORCID: ), Mara Averick [ctb], Aaron Jacobs [ctb], Andreas Bender [ctb], Sven Templer [ctb], Paul-Christian Buerkner [ctb], Matthew Kay [ctb], Erwan Le Pennec [ctb], Johan Junkka [ctb], Hao Zhu [ctb], Benjamin Soltoff [ctb], Zoe Wilkinson Saldana [ctb], Tyler Littlefield [ctb], Charles T. 
Gray [ctb], Shabbh E. Banks [ctb], Serina Robinson [ctb], Roger Bivand [ctb], Riinu Ots [ctb], Nicholas Williams [ctb], Nina Jakobsen [ctb], Michael Weylandt [ctb], Lisa Lendway [ctb], Karl Hailperin [ctb], Josue Rodriguez [ctb], Jenny Bryan [ctb], Chris Jarvis [ctb], Greg Macfarlane [ctb], Brian Mannakee [ctb], Drew Tyre [ctb], Shreyas Singh [ctb], Laurens Geffert [ctb], Hong Ooi [ctb], Henrik Bengtsson [ctb], Eduard Szocs [ctb], David Hugh-Jones [ctb], Matthieu Stigler [ctb], Hugo Tavares [ctb] (ORCID: ), R. Willem Vervoort [ctb], Brenton M. Wiernik [ctb], Josh Yamamoto [ctb], Jasme Lee [ctb], Taren Sanders [ctb] (ORCID: ), Ilaria Prosdocimi [ctb] (ORCID: ), Daniel D. Sjoberg [ctb] (ORCID: ), Alex Reinhart [ctb] (ORCID: )", @@ -1300,92 +1257,6 @@ "Maintainer": "Gábor Csárdi ", "Repository": "CRAN" }, - "car": { - "Package": "car", - "Version": "3.1-3", - "Source": "Repository", - "Date": "2024-09-23", - "Title": "Companion to Applied Regression", - "Authors@R": "c(person(\"John\", \"Fox\", role = c(\"aut\", \"cre\"), email = \"jfox@mcmaster.ca\"), person(\"Sanford\", \"Weisberg\", role = \"aut\", email = \"sandy@umn.edu\"), person(\"Brad\", \"Price\", role = \"aut\", email = \"brad.price@mail.wvu.edu\"), person(\"Daniel\", \"Adler\", role=\"ctb\"), person(\"Douglas\", \"Bates\", role = \"ctb\"), person(\"Gabriel\", \"Baud-Bovy\", role = \"ctb\"), person(\"Ben\", \"Bolker\", role=\"ctb\"), person(\"Steve\", \"Ellison\", role=\"ctb\"), person(\"David\", \"Firth\", role = \"ctb\"), person(\"Michael\", \"Friendly\", role = \"ctb\"), person(\"Gregor\", \"Gorjanc\", role = \"ctb\"), person(\"Spencer\", \"Graves\", role = \"ctb\"), person(\"Richard\", \"Heiberger\", role = \"ctb\"), person(\"Pavel\", \"Krivitsky\", role = \"ctb\"), person(\"Rafael\", \"Laboissiere\", role = \"ctb\"), person(\"Martin\", \"Maechler\", role=\"ctb\"), person(\"Georges\", \"Monette\", role = \"ctb\"), person(\"Duncan\", \"Murdoch\", role=\"ctb\"), person(\"Henric\", \"Nilsson\", role = 
\"ctb\"), person(\"Derek\", \"Ogle\", role = \"ctb\"), person(\"Brian\", \"Ripley\", role = \"ctb\"), person(\"Tom\", \"Short\", role=\"ctb\"), person(\"William\", \"Venables\", role = \"ctb\"), person(\"Steve\", \"Walker\", role=\"ctb\"), person(\"David\", \"Winsemius\", role=\"ctb\"), person(\"Achim\", \"Zeileis\", role = \"ctb\"), person(\"R-Core\", role=\"ctb\"))", - "Depends": [ - "R (>= 3.5.0)", - "carData (>= 3.0-0)" - ], - "Imports": [ - "abind", - "Formula", - "MASS", - "mgcv", - "nnet", - "pbkrtest (>= 0.4-4)", - "quantreg", - "grDevices", - "utils", - "stats", - "graphics", - "lme4 (>= 1.1-27.1)", - "nlme", - "scales" - ], - "Suggests": [ - "alr4", - "boot", - "coxme", - "effects", - "knitr", - "leaps", - "lmtest", - "Matrix", - "MatrixModels", - "ordinal", - "plotrix", - "mvtnorm", - "rgl (>= 0.111.3)", - "rio", - "sandwich", - "SparseM", - "survival", - "survey" - ], - "ByteCompile": "yes", - "LazyLoad": "yes", - "Description": "Functions to Accompany J. Fox and S. Weisberg, An R Companion to Applied Regression, Third Edition, Sage, 2019.", - "License": "GPL (>= 2)", - "URL": "https://r-forge.r-project.org/projects/car/, https://CRAN.R-project.org/package=car, https://www.john-fox.ca/Companion/index.html", - "VignetteBuilder": "knitr", - "NeedsCompilation": "no", - "Author": "John Fox [aut, cre], Sanford Weisberg [aut], Brad Price [aut], Daniel Adler [ctb], Douglas Bates [ctb], Gabriel Baud-Bovy [ctb], Ben Bolker [ctb], Steve Ellison [ctb], David Firth [ctb], Michael Friendly [ctb], Gregor Gorjanc [ctb], Spencer Graves [ctb], Richard Heiberger [ctb], Pavel Krivitsky [ctb], Rafael Laboissiere [ctb], Martin Maechler [ctb], Georges Monette [ctb], Duncan Murdoch [ctb], Henric Nilsson [ctb], Derek Ogle [ctb], Brian Ripley [ctb], Tom Short [ctb], William Venables [ctb], Steve Walker [ctb], David Winsemius [ctb], Achim Zeileis [ctb], R-Core [ctb]", - "Maintainer": "John Fox ", - "Repository": "CRAN", - "Encoding": "UTF-8" - }, - "carData": { - "Package": 
"carData", - "Version": "3.0-5", - "Source": "Repository", - "Date": "2022-01-05", - "Title": "Companion to Applied Regression Data Sets", - "Authors@R": "c(person(\"John\", \"Fox\", role = c(\"aut\", \"cre\"), email = \"jfox@mcmaster.ca\"), person(\"Sanford\", \"Weisberg\", role = \"aut\", email = \"sandy@umn.edu\"), person(\"Brad\", \"Price\", role = \"aut\", email = \"brad.price@mail.wvu.edu\"))", - "Depends": [ - "R (>= 3.5.0)" - ], - "Suggests": [ - "car (>= 3.0-0)" - ], - "LazyLoad": "yes", - "LazyData": "yes", - "Description": "Datasets to Accompany J. Fox and S. Weisberg, An R Companion to Applied Regression, Third Edition, Sage (2019).", - "License": "GPL (>= 2)", - "URL": "https://r-forge.r-project.org/projects/car/, https://CRAN.R-project.org/package=carData, https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html", - "Author": "John Fox [aut, cre], Sanford Weisberg [aut], Brad Price [aut]", - "Maintainer": "John Fox ", - "Repository": "CRAN", - "Repository/R-Forge/Project": "car", - "Repository/R-Forge/Revision": "694", - "Repository/R-Forge/DateTimeStamp": "2022-01-05 19:40:37", - "NeedsCompilation": "no", - "Encoding": "UTF-8" - }, "cellranger": { "Package": "cellranger", "Version": "1.1.0", @@ -1498,9 +1369,10 @@ }, "cluster": { "Package": "cluster", - "Version": "2.1.6", + "Version": "2.1.8.1", "Source": "Repository", - "Date": "2023-11-30", + "VersionNote": "Last CRAN: 2.1.8 on 2024-12-10; 2.1.7 on 2024-12-06; 2.1.6 on 2023-11-30; 2.1.5 on 2023-11-27", + "Date": "2025-03-11", "Priority": "recommended", "Title": "\"Finding Groups in Data\": Cluster Analysis Extended Rousseeuw et al.", "Description": "Methods for Cluster analysis. 
Much extended the original from Peter Rousseeuw, Anja Struyf and Mia Hubert, based on Kaufman and Rousseeuw (1990) \"Finding Groups in Data\".", @@ -1535,8 +1407,7 @@ "URL": "https://svn.r-project.org/R-packages/trunk/cluster/", "NeedsCompilation": "yes", "Author": "Martin Maechler [aut, cre] (), Peter Rousseeuw [aut] (Fortran original, ), Anja Struyf [aut] (S original), Mia Hubert [aut] (S original, ), Kurt Hornik [trl, ctb] (port to R; maintenance(1999-2000), ), Matthias Studer [ctb], Pierre Roudier [ctb], Juan Gonzalez [ctb], Kamil Kozlowski [ctb], Erich Schubert [ctb] (fastpam options for pam(), ), Keefe Murphy [ctb] (volume.ellipsoid({d >= 3}))", - "Repository": "RSPM", - "Encoding": "UTF-8" + "Repository": "CRAN" }, "coda": { "Package": "coda", @@ -1632,59 +1503,9 @@ "License": "GPL (>= 3)", "URL": "https://strimmerlab.github.io/software/corpcor/", "NeedsCompilation": "no", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, - "cowplot": { - "Package": "cowplot", - "Version": "1.2.0", - "Source": "Repository", - "Title": "Streamlined Plot Theme and Plot Annotations for 'ggplot2'", - "Authors@R": "person( given = \"Claus O.\", family = \"Wilke\", role = c(\"aut\", \"cre\"), email = \"wilke@austin.utexas.edu\", comment = c(ORCID = \"0000-0002-7470-9261\") )", - "Description": "Provides various features that help with creating publication-quality figures with 'ggplot2', such as a set of themes, functions to align plots and arrange them into complex compound figures, and functions that make it easy to annotate plots and or mix plots with images. The package was originally written for internal use in the Wilke lab, hence the name (Claus O. Wilke's plot package). 
It has also been used extensively in the book Fundamentals of Data Visualization.", - "URL": "https://wilkelab.org/cowplot/", - "BugReports": "https://github.com/wilkelab/cowplot/issues", - "Depends": [ - "R (>= 3.5.0)" - ], - "Imports": [ - "ggplot2 (>= 3.5.2)", - "grid", - "gtable", - "grDevices", - "methods", - "rlang", - "scales" - ], - "License": "GPL-2", - "Suggests": [ - "Cairo", - "covr", - "dplyr", - "forcats", - "gridGraphics (>= 0.4-0)", - "knitr", - "lattice", - "magick", - "maps", - "PASWR", - "patchwork", - "rmarkdown", - "ragg", - "testthat (>= 1.0.0)", - "tidyr", - "vdiffr (>= 0.3.0)", - "VennDiagram" - ], - "VignetteBuilder": "knitr", - "Collate": "'add_sub.R' 'align_plots.R' 'as_grob.R' 'as_gtable.R' 'axis_canvas.R' 'cowplot.R' 'draw.R' 'get_plot_component.R' 'get_axes.R' 'get_titles.R' 'get_legend.R' 'get_panel.R' 'gtable.R' 'key_glyph.R' 'plot_grid.R' 'save.R' 'set_null_device.R' 'setup.R' 'stamp.R' 'themes.R' 'utils_ggplot2.R'", - "RoxygenNote": "7.3.2", - "Encoding": "UTF-8", - "NeedsCompilation": "no", - "Author": "Claus O. Wilke [aut, cre] (ORCID: )", - "Maintainer": "Claus O. 
Wilke ", - "Repository": "CRAN" - }, "cpp11": { "Package": "cpp11", "Version": "0.5.2", @@ -1764,7 +1585,7 @@ }, "curl": { "Package": "curl", - "Version": "6.4.0", + "Version": "7.0.0", "Source": "Repository", "Type": "Package", "Title": "A Modern and Flexible Web Client for R", @@ -1788,7 +1609,7 @@ "Depends": [ "R (>= 3.0.0)" ], - "RoxygenNote": "7.3.2.9000", + "RoxygenNote": "7.3.2", "Encoding": "UTF-8", "Language": "en-US", "NeedsCompilation": "yes", @@ -1832,7 +1653,7 @@ }, "dbplyr": { "Package": "dbplyr", - "Version": "2.5.0", + "Version": "2.5.1", "Source": "Repository", "Type": "Package", "Title": "A 'dplyr' Back End for Databases", @@ -1875,7 +1696,7 @@ "rmarkdown", "RPostgres (>= 1.4.5)", "RPostgreSQL", - "RSQLite (>= 2.3.1)", + "RSQLite (>= 2.3.8)", "testthat (>= 3.1.10)" ], "VignetteBuilder": "knitr", @@ -1884,7 +1705,7 @@ "Config/testthat/parallel": "TRUE", "Encoding": "UTF-8", "Language": "en-gb", - "RoxygenNote": "7.3.1", + "RoxygenNote": "7.3.3", "Collate": "'db-sql.R' 'utils-check.R' 'import-standalone-types-check.R' 'import-standalone-obj-type.R' 'utils.R' 'sql.R' 'escape.R' 'translate-sql-cut.R' 'translate-sql-quantile.R' 'translate-sql-string.R' 'translate-sql-paste.R' 'translate-sql-helpers.R' 'translate-sql-window.R' 'translate-sql-conditional.R' 'backend-.R' 'backend-access.R' 'backend-hana.R' 'backend-hive.R' 'backend-impala.R' 'verb-copy-to.R' 'backend-mssql.R' 'backend-mysql.R' 'backend-odbc.R' 'backend-oracle.R' 'backend-postgres.R' 'backend-postgres-old.R' 'backend-redshift.R' 'backend-snowflake.R' 'backend-spark-sql.R' 'backend-sqlite.R' 'backend-teradata.R' 'build-sql.R' 'data-cache.R' 'data-lahman.R' 'data-nycflights13.R' 'db-escape.R' 'db-io.R' 'db.R' 'dbplyr.R' 'explain.R' 'ident.R' 'import-standalone-s3-register.R' 'join-by-compat.R' 'join-cols-compat.R' 'lazy-join-query.R' 'lazy-ops.R' 'lazy-query.R' 'lazy-select-query.R' 'lazy-set-op-query.R' 'memdb.R' 'optimise-utils.R' 'pillar.R' 'progress.R' 'sql-build.R' 'query-join.R' 
'query-select.R' 'query-semi-join.R' 'query-set-op.R' 'query.R' 'reexport.R' 'remote.R' 'rows.R' 'schema.R' 'simulate.R' 'sql-clause.R' 'sql-expr.R' 'src-sql.R' 'src_dbi.R' 'table-name.R' 'tbl-lazy.R' 'tbl-sql.R' 'test-frame.R' 'testthat.R' 'tidyeval-across.R' 'tidyeval.R' 'translate-sql.R' 'utils-format.R' 'verb-arrange.R' 'verb-compute.R' 'verb-count.R' 'verb-distinct.R' 'verb-do-query.R' 'verb-do.R' 'verb-expand.R' 'verb-fill.R' 'verb-filter.R' 'verb-group_by.R' 'verb-head.R' 'verb-joins.R' 'verb-mutate.R' 'verb-pivot-longer.R' 'verb-pivot-wider.R' 'verb-pull.R' 'verb-select.R' 'verb-set-ops.R' 'verb-slice.R' 'verb-summarise.R' 'verb-uncount.R' 'verb-window.R' 'zzz.R'", "NeedsCompilation": "no", "Author": "Hadley Wickham [aut, cre], Maximilian Girlich [aut], Edgar Ruiz [aut], Posit Software, PBC [cph, fnd]", @@ -1927,7 +1748,7 @@ "Collate": "'assertions.R' 'authors-at-r.R' 'built.R' 'classes.R' 'collate.R' 'constants.R' 'deps.R' 'desc-package.R' 'description.R' 'encoding.R' 'find-package-root.R' 'latex.R' 'non-oo-api.R' 'package-archives.R' 'read.R' 'remotes.R' 'str.R' 'syntax_checks.R' 'urls.R' 'utils.R' 'validate.R' 'version.R'", "NeedsCompilation": "no", "Author": "Gábor Csárdi [aut, cre], Kirill Müller [aut], Jim Hester [aut], Maëlle Salmon [ctb] (), Posit Software, PBC [cph, fnd]", - "Repository": "RSPM" + "Repository": "CRAN" }, "diagonals": { "Package": "diagonals", @@ -1953,7 +1774,7 @@ "NeedsCompilation": "no", "Author": "Bastiaan Quast [aut, cre] ()", "Maintainer": "Bastiaan Quast ", - "Repository": "CRAN" + "Repository": "RSPM" }, "diagram": { "Package": "diagram", @@ -1974,12 +1795,12 @@ "License": "GPL (>= 2)", "LazyData": "yes", "NeedsCompilation": "no", - "Repository": "RSPM", + "Repository": "CRAN", "Encoding": "UTF-8" }, "diffobj": { "Package": "diffobj", - "Version": "0.3.5", + "Version": "0.3.6", "Source": "Repository", "Type": "Package", "Title": "Diffs for R Objects", @@ -1991,7 +1812,7 @@ "License": "GPL-2 | GPL-3", "URL": 
"https://github.com/brodieG/diffobj", "BugReports": "https://github.com/brodieG/diffobj/issues", - "RoxygenNote": "7.1.1", + "RoxygenNote": "7.2.3", "VignetteBuilder": "knitr", "Encoding": "UTF-8", "Suggests": [ @@ -2009,7 +1830,7 @@ "NeedsCompilation": "yes", "Author": "Brodie Gaslam [aut, cre], Michael B. Allen [ctb, cph] (Original C implementation of Myers Diff Algorithm)", "Maintainer": "Brodie Gaslam ", - "Repository": "RSPM" + "Repository": "CRAN" }, "digest": { "Package": "digest", @@ -2076,55 +1897,7 @@ "NeedsCompilation": "no", "Author": "Mitchell O'Hara-Wild [aut, cre] (), Matthew Kay [aut] (), Alex Hayes [aut] (), Rob Hyndman [aut] (), Earo Wang [ctb] (), Vencislav Popov [ctb] ()", "Maintainer": "Mitchell O'Hara-Wild ", - "Repository": "CRAN" - }, - "doBy": { - "Package": "doBy", - "Version": "4.7.0", - "Source": "Repository", - "Title": "Groupwise Statistics, LSmeans, Linear Estimates, Utilities", - "Authors@R": "c( person(given = \"Ulrich\", family = \"Halekoh\", email = \"uhalekoh@health.sdu.dk\", role = c(\"aut\", \"cph\")), person(given = \"Søren\", family = \"Højsgaard\", email = \"sorenh@math.aau.dk\", role = c(\"aut\", \"cre\", \"cph\")) )", - "Description": "Utility package containing: Main categories: Working with grouped data: 'do' something to data when stratified 'by' some variables. General linear estimates. Data handling utilities. Functional programming, in particular restrict functions to a smaller domain. Miscellaneous functions for data handling. Model stability in connection with model selection. 
Miscellaneous other tools.", - "Encoding": "UTF-8", - "VignetteBuilder": "knitr", - "LazyData": "true", - "LazyDataCompression": "xz", - "URL": "https://github.com/hojsgaard/doBy", - "License": "GPL (>= 2)", - "Depends": [ - "R (>= 4.2.0)", - "methods" - ], - "Imports": [ - "boot", - "broom", - "cowplot", - "Deriv", - "dplyr", - "ggplot2", - "MASS", - "Matrix", - "modelr", - "microbenchmark", - "rlang", - "tibble", - "tidyr" - ], - "Suggests": [ - "geepack", - "knitr", - "lme4", - "markdown", - "multcomp", - "pbkrtest (>= 0.5.2)", - "survival", - "testthat (>= 2.1.0)" - ], - "RoxygenNote": "7.3.2", - "NeedsCompilation": "no", - "Author": "Ulrich Halekoh [aut, cph], Søren Højsgaard [aut, cre, cph]", - "Maintainer": "Søren Højsgaard ", - "Repository": "CRAN" + "Repository": "RSPM" }, "doParallel": { "Package": "doParallel", @@ -2156,7 +1929,7 @@ "NeedsCompilation": "no", "Author": "Folashade Daniel [cre], Microsoft Corporation [aut, cph], Steve Weston [aut], Dan Tenenbaum [ctb]", "Maintainer": "Folashade Daniel ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "dplyr": { @@ -2224,7 +1997,7 @@ }, "dtplyr": { "Package": "dtplyr", - "Version": "1.3.1", + "Version": "1.3.2", "Source": "Repository", "Title": "Data Table Back-End for 'dplyr'", "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"cre\", \"aut\")), person(\"Maximilian\", \"Girlich\", role = \"aut\"), person(\"Mark\", \"Fairbanks\", role = \"aut\"), person(\"Ryan\", \"Dickerson\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -2233,7 +2006,7 @@ "URL": "https://dtplyr.tidyverse.org, https://github.com/tidyverse/dtplyr", "BugReports": "https://github.com/tidyverse/dtplyr/issues", "Depends": [ - "R (>= 3.3)" + "R (>= 4.0)" ], "Imports": [ "cli (>= 3.4.0)", @@ -2259,7 +2032,7 @@ "Config/Needs/website": "tidyverse/tidytemplate", "Config/testthat/edition": "3", "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", + "RoxygenNote": 
"7.3.2.9000", "NeedsCompilation": "no", "Author": "Hadley Wickham [cre, aut], Maximilian Girlich [aut], Mark Fairbanks [aut], Ryan Dickerson [aut], Posit Software, PBC [cph, fnd]", "Maintainer": "Hadley Wickham ", @@ -2315,11 +2088,11 @@ "NeedsCompilation": "yes", "Author": "Graeme Blair [aut, cre], Jasper Cooper [aut], Alexander Coppock [aut], Macartan Humphreys [aut], Luke Sonnet [aut], Neal Fultz [ctb], Lily Medina [ctb], Russell Lenth [ctb], Molly Offer-Westort [ctb]", "Maintainer": "Graeme Blair ", - "Repository": "CRAN" + "Repository": "RSPM" }, "evaluate": { "Package": "evaluate", - "Version": "1.0.4", + "Version": "1.0.5", "Source": "Repository", "Type": "Package", "Title": "Parsing and Evaluation Tools that Provide More Details than the Default", @@ -2382,7 +2155,7 @@ "NeedsCompilation": "yes", "Author": "Martin Maechler [aut, cre] (), Christophe Dutang [aut] (), Vincent Goulet [aut] (), Douglas Bates [ctb] (cosmetic clean up, in svn r42), David Firth [ctb] (expm(method= \"PadeO\" and \"TaylorO\")), Marina Shapira [ctb] (expm(method= \"PadeO\" and \"TaylorO\")), Michael Stadelmann [ctb] (\"Higham08*\" methods, see ?expm.Higham08...)", "Maintainer": "Martin Maechler ", - "Repository": "CRAN" + "Repository": "RSPM" }, "farver": { "Package": "farver", @@ -2464,16 +2237,16 @@ }, "forcats": { "Package": "forcats", - "Version": "1.0.0", + "Version": "1.0.1", "Source": "Repository", "Title": "Tools for Working with Categorical Variables (Factors)", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@rstudio.com\", role = c(\"aut\", \"cre\")), person(\"RStudio\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Description": "Helpers for reordering factor levels (including moving specified levels to front, ordering by first appearance, reversing, and randomly shuffling), 
and tools for modifying factor levels (including collapsing rare levels into other, 'anonymising', and manually 'recoding').", "License": "MIT + file LICENSE", "URL": "https://forcats.tidyverse.org/, https://github.com/tidyverse/forcats", "BugReports": "https://github.com/tidyverse/forcats/issues", "Depends": [ - "R (>= 3.4)" + "R (>= 4.1)" ], "Imports": [ "cli (>= 3.4.0)", @@ -2498,10 +2271,10 @@ "Config/testthat/edition": "3", "Encoding": "UTF-8", "LazyData": "true", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.3", "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut, cre], RStudio [cph, fnd]", - "Maintainer": "Hadley Wickham ", + "Author": "Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd] (ROR: )", + "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, "foreach": { @@ -2656,7 +2429,7 @@ "NeedsCompilation": "no", "Author": "Davis Vaughan [aut, cre], Matt Dancho [aut], RStudio [cph, fnd]", "Maintainer": "Davis Vaughan ", - "Repository": "RSPM" + "Repository": "CRAN" }, "future": { "Package": "future", @@ -2699,7 +2472,7 @@ }, "gargle": { "Package": "gargle", - "Version": "1.5.2", + "Version": "1.6.0", "Source": "Repository", "Title": "Utilities for Working with Google APIs", "Authors@R": "c( person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Craig\", \"Citro\", , \"craigcitro@google.com\", role = \"aut\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Google Inc\", role = \"cph\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -2716,7 +2489,7 @@ "glue (>= 1.3.0)", "httr (>= 1.4.5)", "jsonlite", - "lifecycle", + "lifecycle (>= 0.2.0)", "openssl", "rappdirs", "rlang (>= 1.1.0)", @@ -2740,9 +2513,9 @@ "Config/testthat/edition": "3", "Encoding": "UTF-8", "Language": "en-US", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.2.9000", "NeedsCompilation": "no", - 
"Author": "Jennifer Bryan [aut, cre] (), Craig Citro [aut], Hadley Wickham [aut] (), Google Inc [cph], Posit Software, PBC [cph, fnd]", + "Author": "Jennifer Bryan [aut, cre] (ORCID: ), Craig Citro [aut], Hadley Wickham [aut] (ORCID: ), Google Inc [cph], Posit Software, PBC [cph, fnd]", "Maintainer": "Jennifer Bryan ", "Repository": "CRAN" }, @@ -2778,11 +2551,6 @@ "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, - "ggbeeswarm": { - "Package": "ggbeeswarm", - "Version": "0.7.2", - "Source": "Repository" - }, "ggdist": { "Package": "ggdist", "Version": "3.3.3", @@ -2848,39 +2616,37 @@ ], "NeedsCompilation": "yes", "Author": "Matthew Kay [aut, cre], Brenton M. Wiernik [ctb]", - "Repository": "CRAN" + "Repository": "RSPM" }, "ggplot2": { "Package": "ggplot2", - "Version": "3.5.2", + "Version": "4.0.0", "Source": "Repository", "Title": "Create Elegant Data Visualisations Using the Grammar of Graphics", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Winston\", \"Chang\", role = \"aut\", comment = c(ORCID = \"0000-0002-1576-2126\")), person(\"Lionel\", \"Henry\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Kohske\", \"Takahashi\", role = \"aut\"), person(\"Claus\", \"Wilke\", role = \"aut\", comment = c(ORCID = \"0000-0002-7470-9261\")), person(\"Kara\", \"Woo\", role = \"aut\", comment = c(ORCID = \"0000-0002-5125-4188\")), person(\"Hiroaki\", \"Yutani\", role = \"aut\", comment = c(ORCID = \"0000-0002-3385-7233\")), person(\"Dewey\", \"Dunnington\", role = \"aut\", comment = c(ORCID = \"0000-0002-9415-4582\")), person(\"Teun\", \"van den Brand\", role = \"aut\", comment = c(ORCID = \"0000-0002-9335-7468\")), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role 
= \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Winston\", \"Chang\", role = \"aut\", comment = c(ORCID = \"0000-0002-1576-2126\")), person(\"Lionel\", \"Henry\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Kohske\", \"Takahashi\", role = \"aut\"), person(\"Claus\", \"Wilke\", role = \"aut\", comment = c(ORCID = \"0000-0002-7470-9261\")), person(\"Kara\", \"Woo\", role = \"aut\", comment = c(ORCID = \"0000-0002-5125-4188\")), person(\"Hiroaki\", \"Yutani\", role = \"aut\", comment = c(ORCID = \"0000-0002-3385-7233\")), person(\"Dewey\", \"Dunnington\", role = \"aut\", comment = c(ORCID = \"0000-0002-9415-4582\")), person(\"Teun\", \"van den Brand\", role = \"aut\", comment = c(ORCID = \"0000-0002-9335-7468\")), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Description": "A system for 'declaratively' creating graphics, based on \"The Grammar of Graphics\". 
You provide the data, tell 'ggplot2' how to map variables to aesthetics, what graphical primitives to use, and it takes care of the details.", "License": "MIT + file LICENSE", "URL": "https://ggplot2.tidyverse.org, https://github.com/tidyverse/ggplot2", "BugReports": "https://github.com/tidyverse/ggplot2/issues", "Depends": [ - "R (>= 3.5)" + "R (>= 4.1)" ], "Imports": [ "cli", - "glue", "grDevices", "grid", - "gtable (>= 0.1.1)", + "gtable (>= 0.3.6)", "isoband", "lifecycle (> 1.0.1)", - "MASS", - "mgcv", "rlang (>= 1.1.0)", - "scales (>= 1.3.0)", + "S7", + "scales (>= 1.4.0)", "stats", - "tibble", "vctrs (>= 0.6.0)", "withr (>= 2.5.0)" ], "Suggests": [ + "broom", "covr", "dplyr", "ggplot2movies", @@ -2889,6 +2655,8 @@ "knitr", "mapproj", "maps", + "MASS", + "mgcv", "multcomp", "munsell", "nlme", @@ -2897,10 +2665,12 @@ "ragg (>= 1.2.6)", "RColorBrewer", "rmarkdown", + "roxygen2", "rpart", "sf (>= 0.7-3)", "svglite (>= 2.1.2)", - "testthat (>= 3.1.2)", + "testthat (>= 3.1.5)", + "tibble", "vdiffr (>= 1.0.6)", "xml2" ], @@ -2910,12 +2680,13 @@ "VignetteBuilder": "knitr", "Config/Needs/website": "ggtext, tidyr, forcats, tidyverse/tidytemplate", "Config/testthat/edition": "3", + "Config/usethis/last-upkeep": "2025-04-23", "Encoding": "UTF-8", "LazyData": "true", "RoxygenNote": "7.3.2", - "Collate": "'ggproto.R' 'ggplot-global.R' 'aaa-.R' 'aes-colour-fill-alpha.R' 'aes-evaluation.R' 'aes-group-order.R' 'aes-linetype-size-shape.R' 'aes-position.R' 'compat-plyr.R' 'utilities.R' 'aes.R' 'utilities-checks.R' 'legend-draw.R' 'geom-.R' 'annotation-custom.R' 'annotation-logticks.R' 'geom-polygon.R' 'geom-map.R' 'annotation-map.R' 'geom-raster.R' 'annotation-raster.R' 'annotation.R' 'autolayer.R' 'autoplot.R' 'axis-secondary.R' 'backports.R' 'bench.R' 'bin.R' 'coord-.R' 'coord-cartesian-.R' 'coord-fixed.R' 'coord-flip.R' 'coord-map.R' 'coord-munch.R' 'coord-polar.R' 'coord-quickmap.R' 'coord-radial.R' 'coord-sf.R' 'coord-transform.R' 'data.R' 'docs_layer.R' 'facet-.R' 
'facet-grid-.R' 'facet-null.R' 'facet-wrap.R' 'fortify-lm.R' 'fortify-map.R' 'fortify-multcomp.R' 'fortify-spatial.R' 'fortify.R' 'stat-.R' 'geom-abline.R' 'geom-rect.R' 'geom-bar.R' 'geom-bin2d.R' 'geom-blank.R' 'geom-boxplot.R' 'geom-col.R' 'geom-path.R' 'geom-contour.R' 'geom-count.R' 'geom-crossbar.R' 'geom-segment.R' 'geom-curve.R' 'geom-defaults.R' 'geom-ribbon.R' 'geom-density.R' 'geom-density2d.R' 'geom-dotplot.R' 'geom-errorbar.R' 'geom-errorbarh.R' 'geom-freqpoly.R' 'geom-function.R' 'geom-hex.R' 'geom-histogram.R' 'geom-hline.R' 'geom-jitter.R' 'geom-label.R' 'geom-linerange.R' 'geom-point.R' 'geom-pointrange.R' 'geom-quantile.R' 'geom-rug.R' 'geom-sf.R' 'geom-smooth.R' 'geom-spoke.R' 'geom-text.R' 'geom-tile.R' 'geom-violin.R' 'geom-vline.R' 'ggplot2-package.R' 'grob-absolute.R' 'grob-dotstack.R' 'grob-null.R' 'grouping.R' 'theme-elements.R' 'guide-.R' 'guide-axis.R' 'guide-axis-logticks.R' 'guide-axis-stack.R' 'guide-axis-theta.R' 'guide-legend.R' 'guide-bins.R' 'guide-colorbar.R' 'guide-colorsteps.R' 'guide-custom.R' 'layer.R' 'guide-none.R' 'guide-old.R' 'guides-.R' 'guides-grid.R' 'hexbin.R' 'import-standalone-obj-type.R' 'import-standalone-types-check.R' 'labeller.R' 'labels.R' 'layer-sf.R' 'layout.R' 'limits.R' 'margins.R' 'performance.R' 'plot-build.R' 'plot-construction.R' 'plot-last.R' 'plot.R' 'position-.R' 'position-collide.R' 'position-dodge.R' 'position-dodge2.R' 'position-identity.R' 'position-jitter.R' 'position-jitterdodge.R' 'position-nudge.R' 'position-stack.R' 'quick-plot.R' 'reshape-add-margins.R' 'save.R' 'scale-.R' 'scale-alpha.R' 'scale-binned.R' 'scale-brewer.R' 'scale-colour.R' 'scale-continuous.R' 'scale-date.R' 'scale-discrete-.R' 'scale-expansion.R' 'scale-gradient.R' 'scale-grey.R' 'scale-hue.R' 'scale-identity.R' 'scale-linetype.R' 'scale-linewidth.R' 'scale-manual.R' 'scale-shape.R' 'scale-size.R' 'scale-steps.R' 'scale-type.R' 'scale-view.R' 'scale-viridis.R' 'scales-.R' 'stat-align.R' 'stat-bin.R' 'stat-bin2d.R' 
'stat-bindot.R' 'stat-binhex.R' 'stat-boxplot.R' 'stat-contour.R' 'stat-count.R' 'stat-density-2d.R' 'stat-density.R' 'stat-ecdf.R' 'stat-ellipse.R' 'stat-function.R' 'stat-identity.R' 'stat-qq-line.R' 'stat-qq.R' 'stat-quantilemethods.R' 'stat-sf-coordinates.R' 'stat-sf.R' 'stat-smooth-methods.R' 'stat-smooth.R' 'stat-sum.R' 'stat-summary-2d.R' 'stat-summary-bin.R' 'stat-summary-hex.R' 'stat-summary.R' 'stat-unique.R' 'stat-ydensity.R' 'summarise-plot.R' 'summary.R' 'theme.R' 'theme-defaults.R' 'theme-current.R' 'utilities-break.R' 'utilities-grid.R' 'utilities-help.R' 'utilities-matrix.R' 'utilities-patterns.R' 'utilities-resolution.R' 'utilities-tidy-eval.R' 'zxx.R' 'zzz.R'", + "Collate": "'ggproto.R' 'ggplot-global.R' 'aaa-.R' 'aes-colour-fill-alpha.R' 'aes-evaluation.R' 'aes-group-order.R' 'aes-linetype-size-shape.R' 'aes-position.R' 'all-classes.R' 'compat-plyr.R' 'utilities.R' 'aes.R' 'annotation-borders.R' 'utilities-checks.R' 'legend-draw.R' 'geom-.R' 'annotation-custom.R' 'annotation-logticks.R' 'scale-type.R' 'layer.R' 'make-constructor.R' 'geom-polygon.R' 'geom-map.R' 'annotation-map.R' 'geom-raster.R' 'annotation-raster.R' 'annotation.R' 'autolayer.R' 'autoplot.R' 'axis-secondary.R' 'backports.R' 'bench.R' 'bin.R' 'coord-.R' 'coord-cartesian-.R' 'coord-fixed.R' 'coord-flip.R' 'coord-map.R' 'coord-munch.R' 'coord-polar.R' 'coord-quickmap.R' 'coord-radial.R' 'coord-sf.R' 'coord-transform.R' 'data.R' 'docs_layer.R' 'facet-.R' 'facet-grid-.R' 'facet-null.R' 'facet-wrap.R' 'fortify-map.R' 'fortify-models.R' 'fortify-spatial.R' 'fortify.R' 'stat-.R' 'geom-abline.R' 'geom-rect.R' 'geom-bar.R' 'geom-tile.R' 'geom-bin2d.R' 'geom-blank.R' 'geom-boxplot.R' 'geom-col.R' 'geom-path.R' 'geom-contour.R' 'geom-point.R' 'geom-count.R' 'geom-crossbar.R' 'geom-segment.R' 'geom-curve.R' 'geom-defaults.R' 'geom-ribbon.R' 'geom-density.R' 'geom-density2d.R' 'geom-dotplot.R' 'geom-errorbar.R' 'geom-freqpoly.R' 'geom-function.R' 'geom-hex.R' 'geom-histogram.R' 'geom-hline.R' 
'geom-jitter.R' 'geom-label.R' 'geom-linerange.R' 'geom-pointrange.R' 'geom-quantile.R' 'geom-rug.R' 'geom-sf.R' 'geom-smooth.R' 'geom-spoke.R' 'geom-text.R' 'geom-violin.R' 'geom-vline.R' 'ggplot2-package.R' 'grob-absolute.R' 'grob-dotstack.R' 'grob-null.R' 'grouping.R' 'properties.R' 'margins.R' 'theme-elements.R' 'guide-.R' 'guide-axis.R' 'guide-axis-logticks.R' 'guide-axis-stack.R' 'guide-axis-theta.R' 'guide-legend.R' 'guide-bins.R' 'guide-colorbar.R' 'guide-colorsteps.R' 'guide-custom.R' 'guide-none.R' 'guide-old.R' 'guides-.R' 'guides-grid.R' 'hexbin.R' 'import-standalone-obj-type.R' 'import-standalone-types-check.R' 'labeller.R' 'labels.R' 'layer-sf.R' 'layout.R' 'limits.R' 'performance.R' 'plot-build.R' 'plot-construction.R' 'plot-last.R' 'plot.R' 'position-.R' 'position-collide.R' 'position-dodge.R' 'position-dodge2.R' 'position-identity.R' 'position-jitter.R' 'position-jitterdodge.R' 'position-nudge.R' 'position-stack.R' 'quick-plot.R' 'reshape-add-margins.R' 'save.R' 'scale-.R' 'scale-alpha.R' 'scale-binned.R' 'scale-brewer.R' 'scale-colour.R' 'scale-continuous.R' 'scale-date.R' 'scale-discrete-.R' 'scale-expansion.R' 'scale-gradient.R' 'scale-grey.R' 'scale-hue.R' 'scale-identity.R' 'scale-linetype.R' 'scale-linewidth.R' 'scale-manual.R' 'scale-shape.R' 'scale-size.R' 'scale-steps.R' 'scale-view.R' 'scale-viridis.R' 'scales-.R' 'stat-align.R' 'stat-bin.R' 'stat-summary-2d.R' 'stat-bin2d.R' 'stat-bindot.R' 'stat-binhex.R' 'stat-boxplot.R' 'stat-connect.R' 'stat-contour.R' 'stat-count.R' 'stat-density-2d.R' 'stat-density.R' 'stat-ecdf.R' 'stat-ellipse.R' 'stat-function.R' 'stat-identity.R' 'stat-manual.R' 'stat-qq-line.R' 'stat-qq.R' 'stat-quantilemethods.R' 'stat-sf-coordinates.R' 'stat-sf.R' 'stat-smooth-methods.R' 'stat-smooth.R' 'stat-sum.R' 'stat-summary-bin.R' 'stat-summary-hex.R' 'stat-summary.R' 'stat-unique.R' 'stat-ydensity.R' 'summarise-plot.R' 'summary.R' 'theme.R' 'theme-defaults.R' 'theme-current.R' 'theme-sub.R' 'utilities-break.R' 
'utilities-grid.R' 'utilities-help.R' 'utilities-patterns.R' 'utilities-resolution.R' 'utilities-tidy-eval.R' 'zxx.R' 'zzz.R'", "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut] (), Winston Chang [aut] (), Lionel Henry [aut], Thomas Lin Pedersen [aut, cre] (), Kohske Takahashi [aut], Claus Wilke [aut] (), Kara Woo [aut] (), Hiroaki Yutani [aut] (), Dewey Dunnington [aut] (), Teun van den Brand [aut] (), Posit, PBC [cph, fnd]", + "Author": "Hadley Wickham [aut] (ORCID: ), Winston Chang [aut] (ORCID: ), Lionel Henry [aut], Thomas Lin Pedersen [aut, cre] (ORCID: ), Kohske Takahashi [aut], Claus Wilke [aut] (ORCID: ), Kara Woo [aut] (ORCID: ), Hiroaki Yutani [aut] (ORCID: ), Dewey Dunnington [aut] (ORCID: ), Teun van den Brand [aut] (ORCID: ), Posit, PBC [cph, fnd] (ROR: )", "Maintainer": "Thomas Lin Pedersen ", "Repository": "CRAN" }, @@ -2966,11 +2737,11 @@ "NeedsCompilation": "yes", "Author": "Kamil Slowikowski [aut, cre] (), Alicia Schep [ctb] (), Sean Hughes [ctb] (), Trung Kien Dang [ctb] (), Saulius Lukauskas [ctb], Jean-Olivier Irisson [ctb] (), Zhian N Kamvar [ctb] (), Thompson Ryan [ctb] (), Dervieux Christophe [ctb] (), Yutani Hiroaki [ctb], Pierre Gramme [ctb], Amir Masoud Abdol [ctb], Malcolm Barrett [ctb] (), Robrecht Cannoodt [ctb] (), Michał Krassowski [ctb] (), Michael Chirico [ctb] (), Pedro Aphalo [ctb] (), Francis Barton [ctb]", "Maintainer": "Kamil Slowikowski ", - "Repository": "CRAN" + "Repository": "RSPM" }, "ggridges": { "Package": "ggridges", - "Version": "0.5.6", + "Version": "0.5.7", "Source": "Repository", "Type": "Package", "Title": "Ridgeline Plots in 'ggplot2'", @@ -2982,7 +2753,7 @@ "R (>= 3.2)" ], "Imports": [ - "ggplot2 (>= 3.4.0)", + "ggplot2 (>= 3.5.0)", "grid (>= 3.0.0)", "scales (>= 0.4.1)", "withr (>= 2.1.1)" @@ -3002,12 +2773,12 @@ ], "VignetteBuilder": "knitr", "Collate": "'data.R' 'ggridges.R' 'geoms.R' 'geomsv.R' 'geoms-gradient.R' 'geom-density-line.R' 'position.R' 'scale-cyclical.R' 'scale-point.R' 'scale-vline.R' 
'stats.R' 'theme.R' 'utils_ggplot2.R' 'utils.R'", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.2", "Encoding": "UTF-8", "NeedsCompilation": "no", - "Author": "Claus O. Wilke [aut, cre] ()", + "Author": "Claus O. Wilke [aut, cre] (ORCID: )", "Maintainer": "Claus O. Wilke ", - "Repository": "CRAN" + "Repository": "RSPM" }, "glmnet": { "Package": "glmnet", @@ -3121,7 +2892,7 @@ }, "googledrive": { "Package": "googledrive", - "Version": "2.1.1", + "Version": "2.1.2", "Source": "Repository", "Title": "An Interface to Google Drive", "Authors@R": "c( person(\"Lucy\", \"D'Agostino McGowan\", , role = \"aut\"), person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -3130,11 +2901,11 @@ "URL": "https://googledrive.tidyverse.org, https://github.com/tidyverse/googledrive", "BugReports": "https://github.com/tidyverse/googledrive/issues", "Depends": [ - "R (>= 3.6)" + "R (>= 4.1)" ], "Imports": [ "cli (>= 3.0.0)", - "gargle (>= 1.5.0)", + "gargle (>= 1.6.0)", "glue (>= 1.4.2)", "httr", "jsonlite", @@ -3153,25 +2924,24 @@ "curl", "dplyr (>= 1.0.0)", "knitr", - "mockr", "rmarkdown", "spelling", - "testthat (>= 3.1.3)" + "testthat (>= 3.1.5)" ], "VignetteBuilder": "knitr", "Config/Needs/website": "tidyverse, tidyverse/tidytemplate", "Config/testthat/edition": "3", "Encoding": "UTF-8", "Language": "en-US", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.3", "NeedsCompilation": "no", - "Author": "Lucy D'Agostino McGowan [aut], Jennifer Bryan [aut, cre] (), Posit Software, PBC [cph, fnd]", + "Author": "Lucy D'Agostino McGowan [aut], Jennifer Bryan [aut, cre] (ORCID: ), Posit Software, PBC [cph, fnd]", "Maintainer": "Jennifer Bryan ", "Repository": "CRAN" }, "googlesheets4": { "Package": "googlesheets4", - "Version": "1.1.1", + "Version": "1.1.2", "Source": "Repository", "Title": "Access Google Sheets using the Sheets API V4", "Authors@R": "c( 
person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -3186,7 +2956,7 @@ "cellranger", "cli (>= 3.0.0)", "curl", - "gargle (>= 1.5.0)", + "gargle (>= 1.6.0)", "glue (>= 1.3.0)", "googledrive (>= 2.1.0)", "httr", @@ -3213,9 +2983,9 @@ "Config/testthat/edition": "3", "Encoding": "UTF-8", "Language": "en-US", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.2.9000", "NeedsCompilation": "no", - "Author": "Jennifer Bryan [cre, aut] (), Posit Software, PBC [cph, fnd]", + "Author": "Jennifer Bryan [cre, aut] (ORCID: ), Posit Software, PBC [cph, fnd]", "Maintainer": "Jennifer Bryan ", "Repository": "CRAN" }, @@ -3247,7 +3017,7 @@ "NeedsCompilation": "no", "Author": "Baptiste Auguie [aut, cre], Anton Antonov [ctb]", "Maintainer": "Baptiste Auguie ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "gtable": { @@ -3341,17 +3111,17 @@ }, "here": { "Package": "here", - "Version": "1.0.1", + "Version": "1.0.2", "Source": "Repository", "Title": "A Simpler Way to Find Your Files", - "Date": "2020-12-13", - "Authors@R": "c(person(given = \"Kirill\", family = \"M\\u00fcller\", role = c(\"aut\", \"cre\"), email = \"krlmlr+r@mailbox.org\", comment = c(ORCID = \"0000-0002-1416-3412\")), person(given = \"Jennifer\", family = \"Bryan\", role = \"ctb\", email = \"jenny@rstudio.com\", comment = c(ORCID = \"0000-0002-6983-2759\")))", + "Date": "2025-09-06", + "Authors@R": "c(person(given = \"Kirill\", family = \"M\\u00fcller\", role = c(\"aut\", \"cre\"), email = \"kirill@cynkra.com\", comment = c(ORCID = \"0000-0002-1416-3412\")), person(given = \"Jennifer\", family = \"Bryan\", role = \"ctb\", email = \"jenny@rstudio.com\", comment = c(ORCID = \"0000-0002-6983-2759\")))", "Description": "Constructs paths to your project's files. Declare the relative path of a file within your project with 'i_am()'. 
Use the 'here()' function as a drop-in replacement for 'file.path()', it will always locate the files relative to your project root.", "License": "MIT + file LICENSE", "URL": "https://here.r-lib.org/, https://github.com/r-lib/here", "BugReports": "https://github.com/r-lib/here/issues", "Imports": [ - "rprojroot (>= 2.0.2)" + "rprojroot (>= 2.1.0)" ], "Suggests": [ "conflicted", @@ -3369,13 +3139,13 @@ ], "VignetteBuilder": "knitr", "Encoding": "UTF-8", - "LazyData": "true", - "RoxygenNote": "7.1.1.9000", + "RoxygenNote": "7.3.3.9000", "Config/testthat/edition": "3", + "Config/Needs/website": "tidyverse/tidytemplate", "NeedsCompilation": "no", - "Author": "Kirill Müller [aut, cre] (), Jennifer Bryan [ctb] ()", - "Maintainer": "Kirill Müller ", - "Repository": "RSPM" + "Author": "Kirill Müller [aut, cre] (ORCID: ), Jennifer Bryan [ctb] (ORCID: )", + "Maintainer": "Kirill Müller ", + "Repository": "CRAN" }, "highr": { "Package": "highr", @@ -3409,13 +3179,17 @@ }, "hms": { "Package": "hms", - "Version": "1.1.3", + "Version": "1.1.4", "Source": "Repository", "Title": "Pretty Time of Day", - "Date": "2023-03-21", - "Authors@R": "c( person(\"Kirill\", \"Müller\", role = c(\"aut\", \"cre\"), email = \"kirill@cynkra.com\", comment = c(ORCID = \"0000-0002-1416-3412\")), person(\"R Consortium\", role = \"fnd\"), person(\"RStudio\", role = \"fnd\") )", + "Date": "2025-10-11", + "Authors@R": "c( person(\"Kirill\", \"Müller\", , \"kirill@cynkra.com\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-1416-3412\")), person(\"R Consortium\", role = \"fnd\"), person(\"Posit Software, PBC\", role = \"fnd\", comment = c(ROR = \"03wc8by49\")) )", "Description": "Implements an S3 class for storing and formatting time-of-day values, based on the 'difftime' class.", + "License": "MIT + file LICENSE", + "URL": "https://hms.tidyverse.org/, https://github.com/tidyverse/hms", + "BugReports": "https://github.com/tidyverse/hms/issues", "Imports": [ + "cli", "lifecycle", "methods", 
"pkgconfig", @@ -3428,17 +3202,12 @@ "pillar (>= 1.1.0)", "testthat (>= 3.0.0)" ], - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "URL": "https://hms.tidyverse.org/, https://github.com/tidyverse/hms", - "BugReports": "https://github.com/tidyverse/hms/issues", - "RoxygenNote": "7.2.3", - "Config/testthat/edition": "3", - "Config/autostyle/scope": "line_breaks", - "Config/autostyle/strict": "false", "Config/Needs/website": "tidyverse/tidytemplate", + "Config/testthat/edition": "3", + "Encoding": "UTF-8", + "RoxygenNote": "7.3.3.9000", "NeedsCompilation": "no", - "Author": "Kirill Müller [aut, cre] (), R Consortium [fnd], RStudio [fnd]", + "Author": "Kirill Müller [aut, cre] (ORCID: ), R Consortium [fnd], Posit Software, PBC [fnd] (ROR: )", "Maintainer": "Kirill Müller ", "Repository": "CRAN" }, @@ -3657,7 +3426,7 @@ "NeedsCompilation": "no", "Author": "Sam Firke [aut, cre], Bill Denney [ctb], Chris Haid [ctb], Ryan Knight [ctb], Malte Grosser [ctb], Jonathan Zadra [ctb]", "Maintainer": "Sam Firke ", - "Repository": "CRAN" + "Repository": "RSPM" }, "jquerylib": { "Package": "jquerylib", @@ -3885,7 +3654,7 @@ }, "lavaan": { "Package": "lavaan", - "Version": "0.6-19", + "Version": "0.6-20", "Source": "Repository", "Title": "Latent Variable Analysis", "Authors@R": "c(person(given = \"Yves\", family = \"Rosseel\", role = c(\"aut\", \"cre\"), email = \"Yves.Rosseel@UGent.be\", comment = c(ORCID = \"0000-0002-4129-4477\")), person(given = c(\"Terrence\",\"D.\"), family = \"Jorgensen\", role = \"aut\", email = \"TJorgensen314@gmail.com\", comment = c(ORCID = \"0000-0001-5111-6773\")), person(given = c(\"Luc\"), family = \"De Wilde\", role = \"aut\", email = \"Luc.DeWilde@UGent.be\"), person(given = \"Daniel\", family = \"Oberski\", role = \"ctb\", email = \"daniel.oberski@gmail.com\"), person(given = \"Jarrett\", family = \"Byrnes\", role = \"ctb\", email = \"byrnes@nceas.ucsb.edu\"), person(given = \"Leonard\", family = \"Vanbrabant\", role = \"ctb\", email = 
\"info@restriktor.org\"), person(given = \"Victoria\", family = \"Savalei\", role = \"ctb\", email = \"vsavalei@ubc.ca\"), person(given = \"Ed\", family = \"Merkle\", role = \"ctb\", email = \"merklee@missouri.edu\"), person(given = \"Michael\", family = \"Hallquist\", role = \"ctb\", email = \"michael.hallquist@gmail.com\"), person(given = \"Mijke\", family = \"Rhemtulla\", role = \"ctb\", email = \"mrhemtulla@ucdavis.edu\"), person(given = \"Myrsini\", family = \"Katsikatsou\", role = \"ctb\", email = \"mirtok2@gmail.com\"), person(given = \"Mariska\", family = \"Barendse\", role = \"ctb\", email = \"m.t.barendse@gmail.com\"), person(given = c(\"Nicholas\"), family = \"Rockwood\", role = \"ctb\", email = \"nrockwood@rti.org\"), person(given = \"Florian\", family = \"Scharf\", role = \"ctb\", email = \"florian.scharf@uni-leipzig.de\"), person(given = \"Han\", family = \"Du\", role = \"ctb\", email = \"hdu@psych.ucla.edu\"), person(given = \"Haziq\", family = \"Jamil\", role = \"ctb\", email = \"haziq.jamil@ubd.edu.bn\", comment = c(ORCID = \"0000-0003-3298-1010\")), person(given = \"Franz\", family = \"Classe\", role = \"ctb\", email = \"classe@dji.de\") )", @@ -3911,9 +3680,9 @@ "ByteCompile": "true", "URL": "https://lavaan.ugent.be", "NeedsCompilation": "no", - "Author": "Yves Rosseel [aut, cre] (), Terrence D. Jorgensen [aut] (), Luc De Wilde [aut], Daniel Oberski [ctb], Jarrett Byrnes [ctb], Leonard Vanbrabant [ctb], Victoria Savalei [ctb], Ed Merkle [ctb], Michael Hallquist [ctb], Mijke Rhemtulla [ctb], Myrsini Katsikatsou [ctb], Mariska Barendse [ctb], Nicholas Rockwood [ctb], Florian Scharf [ctb], Han Du [ctb], Haziq Jamil [ctb] (), Franz Classe [ctb]", + "Author": "Yves Rosseel [aut, cre] (ORCID: ), Terrence D. 
Jorgensen [aut] (ORCID: ), Luc De Wilde [aut], Daniel Oberski [ctb], Jarrett Byrnes [ctb], Leonard Vanbrabant [ctb], Victoria Savalei [ctb], Ed Merkle [ctb], Michael Hallquist [ctb], Mijke Rhemtulla [ctb], Myrsini Katsikatsou [ctb], Mariska Barendse [ctb], Nicholas Rockwood [ctb], Florian Scharf [ctb], Han Du [ctb], Haziq Jamil [ctb] (ORCID: ), Franz Classe [ctb]", "Maintainer": "Yves Rosseel ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "lifecycle": { @@ -3959,7 +3728,7 @@ }, "listenv": { "Package": "listenv", - "Version": "0.9.1", + "Version": "0.10.0", "Source": "Repository", "Depends": [ "R (>= 3.1.2)" @@ -3975,13 +3744,13 @@ "Description": "List environments are environments that have list-like properties. For instance, the elements of a list environment are ordered and can be accessed and iterated over using index subsetting, e.g. 'x <- listenv(a = 1, b = 2); for (i in seq_along(x)) x[[i]] <- x[[i]] ^ 2; y <- as.list(x)'.", "License": "LGPL (>= 2.1)", "LazyLoad": "TRUE", - "URL": "https://listenv.futureverse.org, https://github.com/HenrikBengtsson/listenv", - "BugReports": "https://github.com/HenrikBengtsson/listenv/issues", - "RoxygenNote": "7.3.1", + "URL": "https://listenv.futureverse.org, https://github.com/futureverse/listenv", + "BugReports": "https://github.com/futureverse/listenv/issues", + "RoxygenNote": "7.3.3", "NeedsCompilation": "no", "Author": "Henrik Bengtsson [aut, cre, cph]", "Maintainer": "Henrik Bengtsson ", - "Repository": "RSPM", + "Repository": "CRAN", "Encoding": "UTF-8" }, "lme4": { @@ -4128,7 +3897,7 @@ "NeedsCompilation": "no", "Author": "Adam Loy [aut, cre] (), Spenser Steele [aut], Jenna Korobova [aut]", "Maintainer": "Adam Loy ", - "Repository": "CRAN" + "Repository": "RSPM" }, "lmtest": { "Package": "lmtest", @@ -4239,11 +4008,11 @@ }, "magrittr": { "Package": "magrittr", - "Version": "2.0.3", + "Version": "2.0.4", "Source": "Repository", "Type": "Package", "Title": "A Forward-Pipe Operator for R", - 
"Authors@R": "c( person(\"Stefan Milton\", \"Bache\", , \"stefan@stefanbache.dk\", role = c(\"aut\", \"cph\"), comment = \"Original author and creator of magrittr\"), person(\"Hadley\", \"Wickham\", , \"hadley@rstudio.com\", role = \"aut\"), person(\"Lionel\", \"Henry\", , \"lionel@rstudio.com\", role = \"cre\"), person(\"RStudio\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Stefan Milton\", \"Bache\", , \"stefan@stefanbache.dk\", role = c(\"aut\", \"cph\"), comment = \"Original author and creator of magrittr\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Lionel\", \"Henry\", , \"lionel@posit.co\", role = \"cre\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Description": "Provides a mechanism for chaining commands with a new forward-pipe operator, %>%. This operator will forward a value, or the result of an expression, into the next function call/expression. There is flexible support for the type of right-hand side expressions. For more information, see package vignette. 
To quote Rene Magritte, \"Ceci n'est pas un pipe.\"", "License": "MIT + file LICENSE", "URL": "https://magrittr.tidyverse.org, https://github.com/tidyverse/magrittr", @@ -4262,10 +4031,10 @@ "ByteCompile": "Yes", "Config/Needs/website": "tidyverse/tidytemplate", "Encoding": "UTF-8", - "RoxygenNote": "7.1.2", + "RoxygenNote": "7.3.3", "NeedsCompilation": "yes", - "Author": "Stefan Milton Bache [aut, cph] (Original author and creator of magrittr), Hadley Wickham [aut], Lionel Henry [cre], RStudio [cph, fnd]", - "Maintainer": "Lionel Henry ", + "Author": "Stefan Milton Bache [aut, cph] (Original author and creator of magrittr), Hadley Wickham [aut], Lionel Henry [cre], Posit Software, PBC [cph, fnd] (ROR: )", + "Maintainer": "Lionel Henry ", "Repository": "CRAN" }, "mathjaxr": { @@ -4287,16 +4056,16 @@ "NeedsCompilation": "yes", "Author": "Wolfgang Viechtbauer [aut, cre] ()", "Maintainer": "Wolfgang Viechtbauer ", - "Repository": "CRAN" + "Repository": "RSPM" }, "mclust": { "Package": "mclust", - "Version": "6.1.1", + "Version": "6.1.2", "Source": "Repository", - "Date": "2024-04-29", + "Date": "2025-10-30", "Title": "Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation", "Description": "Gaussian finite mixture models fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization, dimension reduction for visualisation, and resampling-based inference.", - "Authors@R": "c(person(\"Chris\", \"Fraley\", role = \"aut\"), person(\"Adrian E.\", \"Raftery\", role = \"aut\", comment = c(ORCID = \"0000-0002-6589-301X\")), person(\"Luca\", \"Scrucca\", role = c(\"aut\", \"cre\"), email = \"luca.scrucca@unipg.it\", comment = c(ORCID = \"0000-0003-3826-0484\")), person(\"Thomas Brendan\", \"Murphy\", role = \"ctb\", comment = c(ORCID = \"0000-0002-5668-7046\")), person(\"Michael\", \"Fop\", role = \"ctb\", comment = c(ORCID = \"0000-0003-3936-2757\")))", + "Authors@R": 
"c(person(\"Chris\", \"Fraley\", role = \"aut\"), person(\"Adrian E.\", \"Raftery\", role = \"aut\", comment = c(ORCID = \"0000-0002-6589-301X\")), person(\"Luca\", \"Scrucca\", role = c(\"aut\", \"cre\"), email = \"luca.scrucca@unibo.it\", comment = c(ORCID = \"0000-0003-3826-0484\")), person(\"Thomas Brendan\", \"Murphy\", role = \"ctb\", comment = c(ORCID = \"0000-0002-5668-7046\")), person(\"Michael\", \"Fop\", role = \"ctb\", comment = c(ORCID = \"0000-0003-3936-2757\")))", "Depends": [ "R (>= 3.0)" ], @@ -4316,13 +4085,13 @@ "License": "GPL (>= 2)", "URL": "https://mclust-org.github.io/mclust/", "VignetteBuilder": "knitr", - "Repository": "CRAN", + "Repository": "RSPM", "ByteCompile": "true", "NeedsCompilation": "yes", "LazyData": "yes", "Encoding": "UTF-8", - "Author": "Chris Fraley [aut], Adrian E. Raftery [aut] (), Luca Scrucca [aut, cre] (), Thomas Brendan Murphy [ctb] (), Michael Fop [ctb] ()", - "Maintainer": "Luca Scrucca " + "Author": "Chris Fraley [aut], Adrian E. Raftery [aut] (ORCID: ), Luca Scrucca [aut, cre] (ORCID: ), Thomas Brendan Murphy [ctb] (ORCID: ), Michael Fop [ctb] (ORCID: )", + "Maintainer": "Luca Scrucca " }, "memoise": { "Package": "memoise", @@ -4385,7 +4154,7 @@ "lmeInfo" ], "Author": "Ting Wang [aut, cre], Edgar Merkle [aut] (ORCID: ), Yves Rosseel [ctb]", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "metadat": { @@ -4434,7 +4203,7 @@ "NeedsCompilation": "no", "Author": "Wolfgang Viechtbauer [aut, cre] (), Thomas White [aut] (), Daniel Noble [aut] (), Alistair Senior [aut] (), W. 
Kyle Hamilton [aut] (), Guido Schwarzer [dtc] ()", "Maintainer": "Wolfgang Viechtbauer ", - "Repository": "CRAN" + "Repository": "RSPM" }, "metafor": { "Package": "metafor", @@ -4515,7 +4284,7 @@ "NeedsCompilation": "no", "Author": "Wolfgang Viechtbauer [aut, cre] ()", "Maintainer": "Wolfgang Viechtbauer ", - "Repository": "CRAN" + "Repository": "RSPM" }, "mgcv": { "Package": "mgcv", @@ -4577,7 +4346,7 @@ "NeedsCompilation": "yes", "Author": "Olaf Mersmann [aut], Claudia Beleites [ctb], Rainer Hurling [ctb], Ari Friedman [ctb], Joshua M. Ulrich [cre]", "Maintainer": "Joshua M. Ulrich ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "mime": { @@ -4676,7 +4445,7 @@ "NeedsCompilation": "no", "Author": "Brian T. Keller [aut, cre, cph]", "Maintainer": "Brian T. Keller ", - "Repository": "CRAN" + "Repository": "RSPM" }, "mnormt": { "Package": "mnormt", @@ -4769,7 +4538,7 @@ "NeedsCompilation": "yes", "Author": "Christopher Jackson [aut, cre]", "Maintainer": "Christopher Jackson ", - "Repository": "CRAN" + "Repository": "RSPM" }, "mvtnorm": { "Package": "mvtnorm", @@ -4794,7 +4563,7 @@ "NeedsCompilation": "yes", "Author": "Alan Genz [aut], Frank Bretz [aut], Tetsuhisa Miwa [aut], Xuefei Mi [aut], Friedrich Leisch [ctb], Fabian Scheipl [ctb], Bjoern Bornkamp [ctb] (), Martin Maechler [ctb] (), Torsten Hothorn [aut, cre] ()", "Maintainer": "Torsten Hothorn ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "nlme": { @@ -4833,18 +4602,14 @@ }, "nlmeU": { "Package": "nlmeU", - "Version": "0.70-9", + "Version": "0.71.7", "Source": "Repository", - "Date": "2022-05-02", - "Author": "Andrzej Galecki agalecki@umich.edu, Tomasz Burzykowski tomasz.burzykowski@uhasselt.be", - "Maintainer": "Andrzej Galecki ", - "Title": "Datasets and Utility Functions Enhancing Functionality of 'nlme' Package", - "Description": "Datasets and utility functions enhancing functionality of nlme package. 
Datasets, functions and scripts are described in book titled 'Linear Mixed-Effects Models: A Step-by-Step Approach' by Galecki and Burzykowski (2013). Package is under development.", - "Depends": [ - "R (>= 2.14.2)" - ], + "Title": "Functions and Data Supporting 'Linear Mixed-Effects Models: A Step-by-Step Approach'", + "Description": "Provides functions and datasets to support the book by Galecki and Burzykowski (2013), 'Linear Mixed-Effects Models: A Step-by-Step Approach', Springer. Includes functions for power calculations, log-likelihood contributions, and data simulation for linear mixed-effects models.", + "Authors@R": "c( person(\"Andrzej T.\", \"Galecki\", email = \"agalecki@umich.edu\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-1542-4001\")), person(\"Tomasz\", \"Burzykowski\", email = \"tomasz.burzykowski@uhasselt.be\", role = \"aut\", comment = c(ORCID = \"0000-0003-3378-975X\")) )", "Imports": [ - "nlme" + "nlme", + "stats" ], "Suggests": [ "reshape", @@ -4854,13 +4619,17 @@ "roxygen2", "testthat" ], - "License": "GPL (>= 2)", - "URL": "http://www-personal.umich.edu/~agalecki/", - "LazyData": "yes", - "Collate": "'logLik1.R' 'nlmeU-package.R' 'Pwr.R' 'simulateY.R' 'varia.R'", + "Encoding": "UTF-8", + "RoxygenNote": "7.3.2", + "License": "GPL-2", + "URL": "https://github.com/agalecki/nlmeU", + "Depends": [ + "R (>= 3.5.0)" + ], "NeedsCompilation": "no", - "Repository": "CRAN", - "Encoding": "UTF-8" + "Author": "Andrzej T. Galecki [aut, cre] (ORCID: ), Tomasz Burzykowski [aut] (ORCID: )", + "Maintainer": "Andrzej T. 
Galecki ", + "Repository": "RSPM" }, "nloptr": { "Package": "nloptr", @@ -4889,32 +4658,6 @@ "Maintainer": "Aymeric Stamm ", "Repository": "CRAN" }, - "nnet": { - "Package": "nnet", - "Version": "7.3-19", - "Source": "Repository", - "Priority": "recommended", - "Date": "2023-05-02", - "Depends": [ - "R (>= 3.0.0)", - "stats", - "utils" - ], - "Suggests": [ - "MASS" - ], - "Authors@R": "c(person(\"Brian\", \"Ripley\", role = c(\"aut\", \"cre\", \"cph\"), email = \"ripley@stats.ox.ac.uk\"), person(\"William\", \"Venables\", role = \"cph\"))", - "Description": "Software for feed-forward neural networks with a single hidden layer, and for multinomial log-linear models.", - "Title": "Feed-Forward Neural Networks and Multinomial Log-Linear Models", - "ByteCompile": "yes", - "License": "GPL-2 | GPL-3", - "URL": "http://www.stats.ox.ac.uk/pub/MASS4/", - "NeedsCompilation": "yes", - "Author": "Brian Ripley [aut, cre, cph], William Venables [cph]", - "Maintainer": "Brian Ripley ", - "Repository": "RSPM", - "Encoding": "UTF-8" - }, "nonnest2": { "Package": "nonnest2", "Version": "0.5-8", @@ -4955,7 +4698,7 @@ "NeedsCompilation": "no", "Author": "Edgar Merkle [aut, cre], Dongjun You [aut], Lennart Schneider [ctb], Mauricio Garnier-Villarreal [ctb], Seongho Bae [ctb], Phil Chalmers [ctb]", "Maintainer": "Edgar Merkle ", - "Repository": "CRAN" + "Repository": "RSPM" }, "npde": { "Package": "npde", @@ -4983,7 +4726,7 @@ "Collate": "'NpdeSimData.R' 'NpdeData.R' 'aaa_generics.R' 'NpdeData-methods.R' 'NpdeRes.R' 'NpdeRes-methods.R' 'NpdeObject.R' 'NpdeObject-methods.R' 'compute_distribution.R' 'compute_npde.R' 'compute_pd.R' 'compute_ploq.R' 'mainNpde.R' 'npde.R' 'npdeControl.R' 'plotNpde-auxDistPlot.R' 'plotNpde-auxScatter.R' 'plotNpde-auxScatterPlot.R' 'plotNpde-binningPI.R' 'plotNpde-covplot.R' 'plotNpde-distributionPlot.R' 'plotNpde-methods.R' 'plotNpde-plotFunctions.R' 'plotNpde-scatterplot.R'", "NeedsCompilation": "no", "Author": "Emmanuelle Comets [aut, cre] (), Karl Brendel 
[ctb], Thi Huyen Tram Nguyen [ctb], Marc Cerou [ctb], Romain Leroux [ctb], France Mentre [ctb]", - "Repository": "CRAN" + "Repository": "RSPM" }, "numDeriv": { "Package": "numDeriv", @@ -5007,7 +4750,7 @@ }, "openssl": { "Package": "openssl", - "Version": "2.3.3", + "Version": "2.3.4", "Source": "Repository", "Type": "Package", "Title": "Toolkit for Encryption, Signatures and Certificates Based on OpenSSL", @@ -5096,7 +4839,7 @@ "NeedsCompilation": "no", "Author": "Tyler Rinker [aut, cre, ctb], Dason Kurkiewicz [aut, ctb], Keith Hughitt [ctb], Albert Wang [ctb], Garrick Aden-Buie [ctb], Albert Wang [ctb], Lukas Burk [ctb]", "Maintainer": "Tyler Rinker ", - "Repository": "RSPM", + "Repository": "CRAN", "Encoding": "UTF-8" }, "parallelly": { @@ -5155,7 +4898,7 @@ "BugReports": "https://github.com/psolymos/pbapply/issues", "NeedsCompilation": "no", "Author": "Peter Solymos [aut, cre] (ORCID: ), Zygmunt Zawadzki [aut], Henrik Bengtsson [ctb], R Core Team [cph, ctb]", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "pbivnorm": { @@ -5170,49 +4913,12 @@ "License": "GPL (>= 2)", "URL": "https://github.com/brentonk/pbivnorm", "NeedsCompilation": "yes", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, - "pbkrtest": { - "Package": "pbkrtest", - "Version": "0.5.5", - "Source": "Repository", - "Title": "Parametric Bootstrap, Kenward-Roger and Satterthwaite Based Methods for Test in Mixed Models", - "Authors@R": "c( person(given = \"Ulrich\", family = \"Halekoh\", email = \"uhalekoh@health.sdu.dk\", role = c(\"aut\", \"cph\")), person(given = \"Søren\", family = \"Højsgaard\", email = \"sorenh@math.aau.dk\", role = c(\"aut\", \"cre\", \"cph\")) )", - "Maintainer": "Søren Højsgaard ", - "Description": "Computes p-values based on (a) Satterthwaite or Kenward-Rogers degree of freedom methods and (b) parametric bootstrap for mixed effects models as implemented in the 'lme4' package. 
Implements parametric bootstrap test for generalized linear mixed models as implemented in 'lme4' and generalized linear models. The package is documented in the paper by Halekoh and Højsgaard, (2012, ). Please see 'citation(\"pbkrtest\")' for citation details.", - "URL": "https://people.math.aau.dk/~sorenh/software/pbkrtest/", - "Depends": [ - "R (>= 4.2.0)", - "lme4 (>= 1.1.31)" - ], - "Imports": [ - "broom", - "dplyr", - "MASS", - "methods", - "numDeriv", - "Matrix (>= 1.2.3)", - "doBy (>= 4.6.22)" - ], - "Suggests": [ - "nlme", - "markdown", - "knitr" - ], - "Encoding": "UTF-8", - "VignetteBuilder": "knitr", - "License": "GPL (>= 2)", - "ByteCompile": "Yes", - "RoxygenNote": "7.3.2", - "LazyData": "true", - "NeedsCompilation": "no", - "Author": "Ulrich Halekoh [aut, cph], Søren Højsgaard [aut, cre, cph]", - "Repository": "CRAN" - }, "pillar": { "Package": "pillar", - "Version": "1.11.0", + "Version": "1.11.1", "Source": "Repository", "Title": "Coloured Formatting for Columns", "Authors@R": "c(person(given = \"Kirill\", family = \"M\\u00fcller\", role = c(\"aut\", \"cre\"), email = \"kirill@cynkra.com\", comment = c(ORCID = \"0000-0002-1416-3412\")), person(given = \"Hadley\", family = \"Wickham\", role = \"aut\"), person(given = \"RStudio\", role = \"cph\"))", @@ -5254,7 +4960,7 @@ ], "VignetteBuilder": "knitr", "Encoding": "UTF-8", - "RoxygenNote": "7.3.2.9000", + "RoxygenNote": "7.3.3.9000", "Config/testthat/edition": "3", "Config/testthat/parallel": "true", "Config/testthat/start-first": "format_multi_fuzz, format_multi_fuzz_2, format_multi, ctl_colonnade, ctl_colonnade_1, ctl_colonnade_2", @@ -5269,10 +4975,10 @@ }, "pkgbuild": { "Package": "pkgbuild", - "Version": "1.4.4", + "Version": "1.4.8", "Source": "Repository", "Title": "Find Tools Needed to Build R Packages", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\"), person(\"Gábor\", \"Csárdi\", , \"csardi.gabor@gmail.com\", role = c(\"aut\", 
\"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Hadley\", \"Wickham\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\"), person(\"Gábor\", \"Csárdi\", , \"csardi.gabor@gmail.com\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Description": "Provides functions used to build R packages. Locates compilers needed to build R packages on various platforms and ensures the PATH is configured appropriately so R can use them.", "License": "MIT + file LICENSE", "URL": "https://github.com/r-lib/pkgbuild, https://pkgbuild.r-lib.org", @@ -5291,20 +4997,20 @@ "covr", "cpp11", "knitr", - "mockery", "Rcpp", "rmarkdown", - "testthat (>= 3.0.0)", + "testthat (>= 3.2.0)", "withr (>= 2.3.0)" ], "Config/Needs/website": "tidyverse/tidytemplate", "Config/testthat/edition": "3", + "Config/usethis/last-upkeep": "2025-04-30", "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.2", "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut], Jim Hester [aut], Gábor Csárdi [aut, cre], Posit Software, PBC [cph, fnd]", + "Author": "Hadley Wickham [aut], Jim Hester [aut], Gábor Csárdi [aut, cre], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Gábor Csárdi ", - "Repository": "RSPM" + "Repository": "CRAN" }, "pkgconfig": { "Package": "pkgconfig", @@ -5332,12 +5038,12 @@ }, "pkgload": { "Package": "pkgload", - "Version": "1.3.4", + "Version": "1.4.1", "Source": "Repository", "Title": "Simulate Package Installation and Attach", "Authors@R": "c( person(\"Hadley\", \"Wickham\", role = \"aut\"), person(\"Winston\", \"Chang\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\"), person(\"Lionel\", \"Henry\", , \"lionel@posit.co\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(\"R Core team\", role = \"ctb\", comment = \"Some namespace and vignette code extracted from base 
R\") )", "Description": "Simulates the process of installing a package and then attaching it. This is a key part of the 'devtools' package as it allows you to rapidly iterate while developing a package.", - "License": "GPL-3", + "License": "MIT + file LICENSE", "URL": "https://github.com/r-lib/pkgload, https://pkgload.r-lib.org", "BugReports": "https://github.com/r-lib/pkgload/issues", "Depends": [ @@ -5345,38 +5051,39 @@ ], "Imports": [ "cli (>= 3.3.0)", - "crayon", "desc", "fs", "glue", + "lifecycle", "methods", "pkgbuild", + "processx", "rlang (>= 1.1.1)", "rprojroot", - "utils", - "withr (>= 2.4.3)" + "utils" ], "Suggests": [ "bitops", - "covr", + "jsonlite", "mathjaxr", - "mockr", "pak", "Rcpp", "remotes", "rstudioapi", - "testthat (>= 3.1.0)" + "testthat (>= 3.2.1.1)", + "usethis", + "withr" ], "Config/Needs/website": "tidyverse/tidytemplate, ggplot2", "Config/testthat/edition": "3", "Config/testthat/parallel": "TRUE", "Config/testthat/start-first": "dll", "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.2", "NeedsCompilation": "no", "Author": "Hadley Wickham [aut], Winston Chang [aut], Jim Hester [aut], Lionel Henry [aut, cre], Posit Software, PBC [cph, fnd], R Core team [ctb] (Some namespace and vignette code extracted from base R)", "Maintainer": "Lionel Henry ", - "Repository": "RSPM" + "Repository": "CRAN" }, "plyr": { "Package": "plyr", @@ -5432,7 +5139,7 @@ ], "Collate": "'adjective.R' 'adverb.R' 'exclamation.R' 'verb.R' 'rpackage.R' 'package.R'", "NeedsCompilation": "no", - "Repository": "RSPM", + "Repository": "CRAN", "Encoding": "UTF-8" }, "prettyunits": { @@ -5635,12 +5342,12 @@ "NeedsCompilation": "no", "Author": "William Revelle [aut, cre] (ORCID: )", "Maintainer": "William Revelle ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "purrr": { "Package": "purrr", - "Version": "1.1.0", + "Version": "1.2.0", "Source": "Repository", "Title": "Functional Programming Tools", "Authors@R": "c( person(\"Hadley\", 
\"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Lionel\", \"Henry\", , \"lionel@posit.co\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"https://ror.org/03wc8by49\")) )", @@ -5659,13 +5366,13 @@ "vctrs (>= 0.6.3)" ], "Suggests": [ - "carrier (>= 0.2.0)", + "carrier (>= 0.3.0)", "covr", "dplyr (>= 0.7.8)", "httr", "knitr", "lubridate", - "mirai (>= 2.4.0)", + "mirai (>= 2.5.1)", "rmarkdown", "testthat (>= 3.0.0)", "tibble", @@ -5681,7 +5388,7 @@ "Config/testthat/edition": "3", "Config/testthat/parallel": "TRUE", "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", + "RoxygenNote": "7.3.3", "NeedsCompilation": "yes", "Author": "Hadley Wickham [aut, cre] (ORCID: ), Lionel Henry [aut], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Hadley Wickham ", @@ -5689,10 +5396,10 @@ }, "purrrlyr": { "Package": "purrrlyr", - "Version": "0.0.8", + "Version": "0.0.10", "Source": "Repository", "Title": "Tools at the Intersection of 'purrr' and 'dplyr'", - "Authors@R": "c(person(given = \"Lionel\", family = \"Henry\", role = c(\"aut\", \"cre\"), email = \"lionel@rstudio.com\"), person(given = \"Hadley\", family = \"Wickham\", role = \"ctb\", email = \"hadley@rstudio.com\"), person(given = \"RStudio\", role = \"cph\"))", + "Authors@R": "c(person(given = \"Lionel\", family = \"Henry\", role = c(\"aut\", \"cre\"), email = \"lionel@posit.co\"), person(given = \"Hadley\", family = \"Wickham\", role = \"ctb\", email = \"hadley@posit.co\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")))", "Description": "Some functions at the intersection of 'dplyr' and 'purrr' that formerly lived in 'purrr'.", "License": "GPL-3 | file LICENSE", "URL": "https://github.com/hadley/purrrlyr", @@ -5710,13 +5417,13 @@ "LinkingTo": [ "Rcpp" ], - "SystemRequirements": "C++11", "Encoding": "UTF-8", - "RoxygenNote": "7.1.1", + "RoxygenNote": 
"7.3.3", "Config/testthat/edition": "3", + "Config/build/compilation-database": "true", "NeedsCompilation": "yes", - "Author": "Lionel Henry [aut, cre], Hadley Wickham [ctb], RStudio [cph]", - "Maintainer": "Lionel Henry ", + "Author": "Lionel Henry [aut, cre], Hadley Wickham [ctb], Posit Software, PBC [cph, fnd] (ROR: )", + "Maintainer": "Lionel Henry ", "Repository": "RSPM" }, "quadprog": { @@ -5745,7 +5452,7 @@ "Description": "Estimation and inference methods for models for conditional quantile functions: Linear and nonlinear parametric and non-parametric (total variation penalized) models for conditional quantiles of a univariate response and several methods for handling censored survival data. Portfolio selection methods based on expected shortfall risk are also now included. See Koenker, R. (2005) Quantile Regression, Cambridge U. Press, and Koenker, R. et al. (2017) Handbook of Quantile Regression, CRC Press, .", "Authors@R": "c( person(\"Roger\", \"Koenker\", role = c(\"cre\",\"aut\"), email = \"rkoenker@illinois.edu\"), person(\"Stephen\", \"Portnoy\", role = c(\"ctb\"), comment = \"Contributions to Censored QR code\", email = \"sportnoy@illinois.edu\"), person(c(\"Pin\", \"Tian\"), \"Ng\", role = c(\"ctb\"), comment = \"Contributions to Sparse QR code\", email = \"pin.ng@nau.edu\"), person(\"Blaise\", \"Melly\", role = c(\"ctb\"), comment = \"Contributions to preprocessing code\", email = \"mellyblaise@gmail.com\"), person(\"Achim\", \"Zeileis\", role = c(\"ctb\"), comment = \"Contributions to dynrq code essentially identical to his dynlm code\", email = \"Achim.Zeileis@uibk.ac.at\"), person(\"Philip\", \"Grosjean\", role = c(\"ctb\"), comment = \"Contributions to nlrq code\", email = \"phgrosjean@sciviews.org\"), person(\"Cleve\", \"Moler\", role = c(\"ctb\"), comment = \"author of several linpack routines\"), person(\"Yousef\", \"Saad\", role = c(\"ctb\"), comment = \"author of sparskit2\"), person(\"Victor\", \"Chernozhukov\", role = c(\"ctb\"), 
comment = \"contributions to extreme value inference code\"), person(\"Ivan\", \"Fernandez-Val\", role = c(\"ctb\"), comment = \"contributions to extreme value inference code\"), person(\"Martin\", \"Maechler\", role = \"ctb\", comment = c(\"tweaks (src/chlfct.f, 'tiny','Large')\", ORCID = \"0000-0002-8685-9910\")), person(c(\"Brian\", \"D\"), \"Ripley\", role = c(\"trl\",\"ctb\"), comment = \"Initial (2001) R port from S (to my everlasting shame -- how could I have been so slow to adopt R!) and for numerous other suggestions and useful advice\", email = \"ripley@stats.ox.ac.uk\"))", "Maintainer": "Roger Koenker ", - "Repository": "CRAN", + "Repository": "RSPM", "Depends": [ "R (>= 3.5)", "stats", @@ -5778,11 +5485,11 @@ }, "ragg": { "Package": "ragg", - "Version": "1.4.0", + "Version": "1.5.0", "Source": "Repository", "Type": "Package", "Title": "Graphic Devices Based on AGG", - "Authors@R": "c( person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Maxim\", \"Shemanarev\", role = c(\"aut\", \"cph\"), comment = \"Author of AGG\"), person(\"Tony\", \"Juricic\", , \"tonygeek@yahoo.com\", role = c(\"ctb\", \"cph\"), comment = \"Contributor to AGG\"), person(\"Milan\", \"Marusinec\", , \"milan@marusinec.sk\", role = c(\"ctb\", \"cph\"), comment = \"Contributor to AGG\"), person(\"Spencer\", \"Garrett\", role = \"ctb\", comment = \"Contributor to AGG\"), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Maxim\", \"Shemanarev\", role = c(\"aut\", \"cph\"), comment = \"Author of AGG\"), person(\"Tony\", \"Juricic\", , \"tonygeek@yahoo.com\", role = c(\"ctb\", \"cph\"), comment = \"Contributor to AGG\"), person(\"Milan\", \"Marusinec\", , \"milan@marusinec.sk\", role = c(\"ctb\", \"cph\"), comment = 
\"Contributor to AGG\"), person(\"Spencer\", \"Garrett\", role = \"ctb\", comment = \"Contributor to AGG\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Maintainer": "Thomas Lin Pedersen ", "Description": "Anti-Grain Geometry (AGG) is a high-quality and high-performance 2D drawing library. The 'ragg' package provides a set of graphic devices based on AGG to use as alternative to the raster devices provided through the 'grDevices' package.", "License": "MIT + file LICENSE", @@ -5802,14 +5509,15 @@ "systemfonts", "textshaping" ], + "Config/build/compilation-database": "true", "Config/Needs/website": "ggplot2, devoid, magick, bench, tidyr, ggridges, hexbin, sessioninfo, pkgdown, tidyverse/tidytemplate", + "Config/testthat/edition": "3", + "Config/usethis/last-upkeep": "2025-04-25", "Encoding": "UTF-8", "RoxygenNote": "7.3.2", - "SystemRequirements": "freetype2, libpng, libtiff, libjpeg", - "Config/testthat/edition": "3", - "Config/build/compilation-database": "true", + "SystemRequirements": "freetype2, libpng, libtiff, libjpeg, libwebp, libwebpmux", "NeedsCompilation": "yes", - "Author": "Thomas Lin Pedersen [cre, aut] (), Maxim Shemanarev [aut, cph] (Author of AGG), Tony Juricic [ctb, cph] (Contributor to AGG), Milan Marusinec [ctb, cph] (Contributor to AGG), Spencer Garrett [ctb] (Contributor to AGG), Posit, PBC [cph, fnd]", + "Author": "Thomas Lin Pedersen [cre, aut] (ORCID: ), Maxim Shemanarev [aut, cph] (Author of AGG), Tony Juricic [ctb, cph] (Contributor to AGG), Milan Marusinec [ctb, cph] (Contributor to AGG), Spencer Garrett [ctb] (Contributor to AGG), Posit Software, PBC [cph, fnd] (ROR: )", "Repository": "CRAN" }, "rappdirs": { @@ -5967,7 +5675,7 @@ }, "reformulas": { "Package": "reformulas", - "Version": "0.4.1", + "Version": "0.4.2", "Source": "Repository", "Title": "Machinery for Processing Random Effect Formulas", "Authors@R": "person(given = \"Ben\", family = \"Bolker\", role = c(\"aut\", \"cre\"), 
email = \"bolker@mcmaster.ca\", comment=c(ORCID=\"0000-0002-2127-0443\"))", @@ -6222,7 +5930,7 @@ "LazyData": "true", "RoxygenNote": "7.1.0", "NeedsCompilation": "yes", - "Repository": "CRAN" + "Repository": "RSPM" }, "rlang": { "Package": "rlang", @@ -6277,7 +5985,7 @@ }, "rmarkdown": { "Package": "rmarkdown", - "Version": "2.29", + "Version": "2.30", "Source": "Repository", "Type": "Package", "Title": "Dynamic Documents for R", @@ -6327,9 +6035,9 @@ "RoxygenNote": "7.3.2", "SystemRequirements": "pandoc (>= 1.14) - http://pandoc.org", "NeedsCompilation": "no", - "Author": "JJ Allaire [aut], Yihui Xie [aut, cre] (), Christophe Dervieux [aut] (), Jonathan McPherson [aut], Javier Luraschi [aut], Kevin Ushey [aut], Aron Atkins [aut], Hadley Wickham [aut], Joe Cheng [aut], Winston Chang [aut], Richard Iannone [aut] (), Andrew Dunning [ctb] (), Atsushi Yasumoto [ctb, cph] (, Number sections Lua filter), Barret Schloerke [ctb], Carson Sievert [ctb] (), Devon Ryan [ctb] (), Frederik Aust [ctb] (), Jeff Allen [ctb], JooYoung Seo [ctb] (), Malcolm Barrett [ctb], Rob Hyndman [ctb], Romain Lesur [ctb], Roy Storey [ctb], Ruben Arslan [ctb], Sergio Oller [ctb], Posit Software, PBC [cph, fnd], jQuery UI contributors [ctb, cph] (jQuery UI library; authors listed in inst/rmd/h/jqueryui/AUTHORS.txt), Mark Otto [ctb] (Bootstrap library), Jacob Thornton [ctb] (Bootstrap library), Bootstrap contributors [ctb] (Bootstrap library), Twitter, Inc [cph] (Bootstrap library), Alexander Farkas [ctb, cph] (html5shiv library), Scott Jehl [ctb, cph] (Respond.js library), Ivan Sagalaev [ctb, cph] (highlight.js library), Greg Franko [ctb, cph] (tocify library), John MacFarlane [ctb, cph] (Pandoc templates), Google, Inc. 
[ctb, cph] (ioslides library), Dave Raggett [ctb] (slidy library), W3C [cph] (slidy library), Dave Gandy [ctb, cph] (Font-Awesome), Ben Sperry [ctb] (Ionicons), Drifty [cph] (Ionicons), Aidan Lister [ctb, cph] (jQuery StickyTabs), Benct Philip Jonsson [ctb, cph] (pagebreak Lua filter), Albert Krewinkel [ctb, cph] (pagebreak Lua filter)", + "Author": "JJ Allaire [aut], Yihui Xie [aut, cre] (ORCID: ), Christophe Dervieux [aut] (ORCID: ), Jonathan McPherson [aut], Javier Luraschi [aut], Kevin Ushey [aut], Aron Atkins [aut], Hadley Wickham [aut], Joe Cheng [aut], Winston Chang [aut], Richard Iannone [aut] (ORCID: ), Andrew Dunning [ctb] (ORCID: ), Atsushi Yasumoto [ctb, cph] (ORCID: , cph: Number sections Lua filter), Barret Schloerke [ctb], Carson Sievert [ctb] (ORCID: ), Devon Ryan [ctb] (ORCID: ), Frederik Aust [ctb] (ORCID: ), Jeff Allen [ctb], JooYoung Seo [ctb] (ORCID: ), Malcolm Barrett [ctb], Rob Hyndman [ctb], Romain Lesur [ctb], Roy Storey [ctb], Ruben Arslan [ctb], Sergio Oller [ctb], Posit Software, PBC [cph, fnd], jQuery UI contributors [ctb, cph] (jQuery UI library; authors listed in inst/rmd/h/jqueryui/AUTHORS.txt), Mark Otto [ctb] (Bootstrap library), Jacob Thornton [ctb] (Bootstrap library), Bootstrap contributors [ctb] (Bootstrap library), Twitter, Inc [cph] (Bootstrap library), Alexander Farkas [ctb, cph] (html5shiv library), Scott Jehl [ctb, cph] (Respond.js library), Ivan Sagalaev [ctb, cph] (highlight.js library), Greg Franko [ctb, cph] (tocify library), John MacFarlane [ctb, cph] (Pandoc templates), Google, Inc. 
[ctb, cph] (ioslides library), Dave Raggett [ctb] (slidy library), W3C [cph] (slidy library), Dave Gandy [ctb, cph] (Font-Awesome), Ben Sperry [ctb] (Ionicons), Drifty [cph] (Ionicons), Aidan Lister [ctb, cph] (jQuery StickyTabs), Benct Philip Jonsson [ctb, cph] (pagebreak Lua filter), Albert Krewinkel [ctb, cph] (pagebreak Lua filter)", "Maintainer": "Yihui Xie ", - "Repository": "RSPM" + "Repository": "CRAN" }, "rpart": { "Package": "rpart", @@ -6379,12 +6087,12 @@ "LazyData": "yes", "URL": "http://www.milbo.org/rpart-plot/index.html", "NeedsCompilation": "no", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "rprojroot": { "Package": "rprojroot", - "Version": "2.1.0", + "Version": "2.1.1", "Source": "Repository", "Title": "Finding Files in Project Subdirectories", "Authors@R": "person(given = \"Kirill\", family = \"M\\u00fcller\", role = c(\"aut\", \"cre\"), email = \"kirill@cynkra.com\", comment = c(ORCID = \"0000-0002-1416-3412\"))", @@ -6443,16 +6151,16 @@ }, "rvest": { "Package": "rvest", - "Version": "1.0.4", + "Version": "1.0.5", "Source": "Repository", "Title": "Easily Harvest (Scrape) Web Pages", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", + "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", "Description": "Wrappers around the 'xml2' and 'httr' packages to make it easy to download, then manipulate, HTML and XML.", "License": "MIT + file LICENSE", "URL": "https://rvest.tidyverse.org/, https://github.com/tidyverse/rvest", "BugReports": "https://github.com/tidyverse/rvest/issues", "Depends": [ - "R (>= 3.6)" + "R (>= 4.1)" ], "Imports": [ "cli", @@ -6463,12 +6171,13 @@ "rlang (>= 1.1.0)", "selectr", "tibble", - "xml2 (>= 1.3)" + "xml2 (>= 1.4.0)" ], "Suggests": [ 
"chromote", "covr", "knitr", + "purrr", "R6", "readr", "repurrrsive", @@ -6476,6 +6185,7 @@ "spelling", "stringi (>= 0.3.1)", "testthat (>= 3.0.2)", + "tidyr", "webfakes" ], "VignetteBuilder": "knitr", @@ -6484,19 +6194,19 @@ "Config/testthat/parallel": "true", "Encoding": "UTF-8", "Language": "en-US", - "RoxygenNote": "7.3.1", + "RoxygenNote": "7.3.2", "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd]", + "Author": "Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, "saemix": { "Package": "saemix", - "Version": "3.3", + "Version": "3.4", "Source": "Repository", "Title": "Stochastic Approximation Expectation Maximization (SAEM) Algorithm", - "Authors@R": "c( person(\"Emmanuelle\", \"Comets\", role = c(\"aut\", \"cre\"), email = \"emmanuelle.comets@inserm.fr\"), person(\"Audrey\", \"Lavenu\", role = \"aut\"), person(\"Marc\", \"Lavielle\", role = \"aut\"), person(\"Belhal\", \"Karimi\", role = \"aut\"), person(\"Maud\", \"Delattre\", role = \"ctb\"), person(\"Marilou\", \"Chanel\", role = \"ctb\"), person(\"Johannes\", \"Ranke\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4371-6538\")), person(\"Sofia\", \"Kaisaridi\", role = \"ctb\"), person(\"Lucie\", \"Fayette\", role = \"ctb\"))", - "Description": "The 'saemix' package implements the Stochastic Approximation EM algorithm for parameter estimation in (non)linear mixed effects models. The SAEM algorithm (i) computes the maximum likelihood estimator of the population parameters, without any approximation of the model (linearisation, quadrature approximation,...), using the Stochastic Approximation Expectation Maximization (SAEM) algorithm, (ii) provides standard errors for the maximum likelihood estimator (iii) estimates the conditional modes, the conditional means and the conditional standard deviations of the individual parameters, using the Hastings-Metropolis algorithm (see Comets et al. 
(2017) ). Many applications of SAEM in agronomy, animal breeding and PKPD analysis have been published by members of the Monolix group. The full PDF documentation for the package including references about the algorithm and examples can be downloaded on the github of the IAME research institute for 'saemix': .", + "Authors@R": "c( person(\"Emmanuelle\", \"Comets\", role = c(\"aut\", \"cre\"), email = \"emmanuelle.comets@inserm.fr\"), person(\"Audrey\", \"Lavenu\", role = \"aut\"), person(\"Marc\", \"Lavielle\", role = \"aut\"), person(\"Belhal\", \"Karimi\", role = \"aut\"), person(\"Maud\", \"Delattre\", role = \"ctb\"), person(\"Alexandra\", \"Lavalley-Morelle\", role = \"ctb\"), person(\"Marilou\", \"Chanel\", role = \"ctb\"), person(\"Johannes\", \"Ranke\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4371-6538\")), person(\"Sofia\", \"Kaisaridi\", role = \"ctb\"), person(\"Lucie\", \"Fayette\", role = \"ctb\"))", + "Description": "The 'saemix' package implements the Stochastic Approximation EM algorithm for parameter estimation in (non)linear mixed effects models. It (i) computes the maximum likelihood estimator of the population parameters, without any approximation of the model (linearisation, quadrature approximation,...), using the Stochastic Approximation Expectation Maximization (SAEM) algorithm, (ii) provides standard errors for the maximum likelihood estimator (iii) estimates the conditional modes, the conditional means and the conditional standard deviations of the individual parameters, using the Hastings-Metropolis algorithm (see Comets et al. (2017) ). Many applications of SAEM in agronomy, animal breeding and PKPD analysis have been published by members of the Monolix group. 
The full PDF documentation for the package including references about the algorithm and examples can be downloaded on the github of the IAME research institute for 'saemix': .", "License": "GPL (>= 2)", "LazyLoad": "yes", "LazyData": "yes", @@ -6520,12 +6230,12 @@ "npde (>= 3.2)" ], "Encoding": "UTF-8", - "RoxygenNote": "7.3.1", + "RoxygenNote": "7.3.2", "NeedsCompilation": "no", - "Collate": "'aaa_generics.R' 'SaemixData.R' 'SaemixModel.R' 'SaemixRes.R' 'SaemixObject.R' 'backward.R' 'compute_LL.R' 'forward.R' 'func_FIM.R' 'func_aux.R' 'func_bootstrap.R' 'func_compare.R' 'func_discreteVPC.R' 'func_distcond.R' 'func_estimParam.R' 'func_exploreData.R' 'func_npde.R' 'func_plots.R' 'func_simulations.R' 'func_stepwise.R' 'main.R' 'main_estep.R' 'main_initialiseMainAlgo.R' 'main_mstep.R' 'stepwise.R' 'zzz.R'", - "Author": "Emmanuelle Comets [aut, cre], Audrey Lavenu [aut], Marc Lavielle [aut], Belhal Karimi [aut], Maud Delattre [ctb], Marilou Chanel [ctb], Johannes Ranke [ctb] (), Sofia Kaisaridi [ctb], Lucie Fayette [ctb]", + "Collate": "'aaa_generics.R' 'SaemixData.R' 'SaemixData-methods.R' 'SaemixData-methods_covariates.R' 'SaemixModel.R' 'SaemixRes.R' 'SaemixObject.R' 'backward.R' 'compute_LL.R' 'forward.R' 'func_FIM.R' 'func_aux.R' 'func_bootstrap.R' 'func_compare.R' 'func_discreteVPC.R' 'func_distcond.R' 'func_estimParam.R' 'func_exploreData.R' 'func_npde.R' 'func_plots.R' 'func_simulations.R' 'func_stepwise.R' 'main.R' 'main_estep.R' 'main_initialiseMainAlgo.R' 'main_mstep.R' 'stepwise.R' 'zzz.R'", + "Author": "Emmanuelle Comets [aut, cre], Audrey Lavenu [aut], Marc Lavielle [aut], Belhal Karimi [aut], Maud Delattre [ctb], Alexandra Lavalley-Morelle [ctb], Marilou Chanel [ctb], Johannes Ranke [ctb] (ORCID: ), Sofia Kaisaridi [ctb], Lucie Fayette [ctb]", "Maintainer": "Emmanuelle Comets ", - "Repository": "CRAN" + "Repository": "RSPM" }, "sandwich": { "Package": "sandwich", @@ -6567,7 +6277,7 @@ "NeedsCompilation": "no", "Author": "Achim Zeileis [aut, cre] (), 
Thomas Lumley [aut] (), Nathaniel Graham [ctb] (), Susanne Koell [ctb]", "Maintainer": "Achim Zeileis ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" }, "sass": { @@ -6713,7 +6423,7 @@ "URL": "https://meghapsimatrix.github.io/simhelpers/", "BugReports": "https://github.com/meghapsimatrix/simhelpers/issues", "Depends": [ - "R (>= 2.10)" + "R (>= 4.1.0)" ], "License": "GPL-3", "Encoding": "UTF-8", @@ -6744,14 +6454,15 @@ ], "RdMacros": "Rdpack", "VignetteBuilder": "knitr", - "Author": "Megha Joshi [aut, cre] (ORCID: ), James Pustejovsky [aut] (ORCID: )", - "Maintainer": "Megha Joshi ", "RemoteType": "github", - "RemoteUsername": "meghapsimatrix", + "RemoteHost": "api.github.com", "RemoteRepo": "simhelpers", - "RemoteRef": "master", - "RemoteSha": "3b1e25cc595de3432e56ee4a777baedf18bc1b78", - "RemoteHost": "api.github.com" + "RemoteUsername": "meghapsimatrix", + "RemoteRef": "HEAD", + "RemoteSha": "a512aa6844ed95aba4cd39102e08222624b51d56", + "NeedsCompilation": "no", + "Author": "Megha Joshi [aut, cre] (ORCID: ), James Pustejovsky [aut] (ORCID: )", + "Maintainer": "Megha Joshi " }, "sn": { "Package": "sn", @@ -6782,7 +6493,7 @@ "Encoding": "UTF-8", "NeedsCompilation": "no", "Author": "Adelchi Azzalini [aut, cre] ()", - "Repository": "CRAN" + "Repository": "RSPM" }, "snakecase": { "Package": "snakecase", @@ -6817,70 +6528,15 @@ "VignetteBuilder": "knitr", "NeedsCompilation": "no", "Author": "Malte Grosser [aut, cre]", - "Repository": "CRAN" - }, - "stargazer": { - "Package": "stargazer", - "Version": "5.2.3", - "Source": "Repository", - "Type": "Package", - "Title": "Well-Formatted Regression and Summary Statistics Tables", - "Date": "2022-03-03", - "Author": "Marek Hlavac ", - "Maintainer": "Marek Hlavac ", - "Description": "Produces LaTeX code, HTML/CSS code and ASCII text for well-formatted tables that hold regression analysis results from several models side-by-side, as well as summary statistics.", - "License": "GPL (>= 2)", - "Imports": [ 
- "stats", - "utils" - ], - "Enhances": [ - "AER", - "betareg", - "brglm", - "censReg", - "dynlm", - "eha", - "erer", - "ergm", - "fGarch", - "gee", - "glmx", - "gmm", - "lfe", - "lme4", - "lmtest", - "MASS", - "mclogit", - "mgcv", - "mlogit", - "nlme", - "nnet", - "ordinal", - "plm", - "pscl", - "quantreg", - "rms", - "relevent", - "robustbase", - "sampleSelection", - "spdep", - "survey", - "survival" - ], - "LazyLoad": "yes", - "Collate": "'stargazer-internal.R' 'stargazer.R'", - "NeedsCompilation": "no", - "Repository": "RSPM", - "Encoding": "UTF-8" + "Repository": "RSPM" }, "statmod": { "Package": "statmod", - "Version": "1.5.0", + "Version": "1.5.1", "Source": "Repository", - "Date": "2022-12-28", + "Date": "2025-10-08", "Title": "Statistical Modeling", - "Author": "Gordon Smyth [cre, aut], Lizhong Chen [aut], Yifang Hu [ctb], Peter Dunn [ctb], Belinda Phipson [ctb], Yunshun Chen [ctb]", + "Authors@R": "c(person(given = \"Gordon\", family = \"Smyth\", role = c(\"cre\", \"aut\"), email = \"smyth@wehi.edu.au\"), person(given = \"Lizhong\", family = \"Chen\", role = \"aut\"), person(given = \"Yifang\", family = \"Hu\", role = \"ctb\"), person(given = \"Peter\", family = \"Dunn\", role = \"ctb\"), person(given = \"Belinda\", family = \"Phipson\", role = \"ctb\"), person(given = \"Yunshun\", family = \"Chen\", role = \"ctb\"))", "Maintainer": "Gordon Smyth ", "Depends": [ "R (>= 3.0.0)" @@ -6896,7 +6552,8 @@ "Description": "A collection of algorithms and functions to aid statistical modeling. Includes limiting dilution analysis (aka ELDA), growth curve comparisons, mixed linear models, heteroscedastic regression, inverse-Gaussian probability calculations, Gauss quadrature and a secure convergence algorithm for nonlinear models. 
Also includes advanced generalized linear model functions including Tweedie and Digamma distributional families, secure convergence and exact distributional calculations for unit deviances.", "License": "GPL-2 | GPL-3", "NeedsCompilation": "yes", - "Repository": "CRAN", + "Author": "Gordon Smyth [cre, aut], Lizhong Chen [aut], Yifang Hu [ctb], Peter Dunn [ctb], Belinda Phipson [ctb], Yunshun Chen [ctb]", + "Repository": "RSPM", "Encoding": "UTF-8" }, "stringi": { @@ -6931,7 +6588,7 @@ }, "stringr": { "Package": "stringr", - "Version": "1.5.1", + "Version": "1.6.0", "Source": "Repository", "Title": "Simple, Consistent Wrappers for Common String Operations", "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\", \"cph\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -6940,7 +6597,7 @@ "URL": "https://stringr.tidyverse.org, https://github.com/tidyverse/stringr", "BugReports": "https://github.com/tidyverse/stringr/issues", "Depends": [ - "R (>= 3.6)" + "R (>= 4.1.0)" ], "Imports": [ "cli", @@ -6964,10 +6621,11 @@ ], "VignetteBuilder": "knitr", "Config/Needs/website": "tidyverse/tidytemplate", + "Config/potools/style": "explicit", "Config/testthat/edition": "3", "Encoding": "UTF-8", "LazyData": "true", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.3", "NeedsCompilation": "no", "Author": "Hadley Wickham [aut, cre, cph], Posit Software, PBC [cph, fnd]", "Maintainer": "Hadley Wickham ", @@ -6975,11 +6633,11 @@ }, "survey": { "Package": "survey", - "Version": "4.4-2", + "Version": "4.4-8", "Source": "Repository", "Title": "Analysis of Complex Survey Samples", "Description": "Summary statistics, two-sample tests, rank tests, generalised linear models, cumulative link models, Cox models, loglinear models, and general maximum pseudolikelihood estimation for multistage stratified, cluster-sampled, unequally weighted survey samples. Variances by Taylor series linearisation or replicate weights. 
Post-stratification, calibration, and raking. Two-phase subsampling designs. Graphics. PPS sampling without replacement. Small-area estimation.", - "Author": "Thomas Lumley, Peter Gao, Ben Schneider", + "Authors@R": "c(person(given = \"Thomas\", family = \"Lumley\", role = \"aut\"), person(given = \"Peter\", family = \"Gao\", role = \"aut\"), person(given = \"Ben\", family = \"Schneider\", role = \"aut\"), person(given = \"\\\"Thomas\", family = \"Lumley\\\"\", role = \"cre\", email = \"t.lumley@auckland.ac.nz\"))", "Maintainer": "\"Thomas Lumley\" ", "License": "GPL-2 | GPL-3", "Depends": [ @@ -7020,6 +6678,7 @@ ], "URL": "http://r-survey.r-forge.r-project.org/survey/", "NeedsCompilation": "yes", + "Author": "Thomas Lumley [aut], Peter Gao [aut], Ben Schneider [aut], \"Thomas Lumley\" [cre]", "Repository": "RSPM", "Encoding": "UTF-8" }, @@ -7055,7 +6714,7 @@ }, "svglite": { "Package": "svglite", - "Version": "2.2.1", + "Version": "2.2.2", "Source": "Repository", "Title": "An 'SVG' Graphics Device", "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Lionel\", \"Henry\", , \"lionel@posit.co\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"T Jake\", \"Luciani\", , \"jake@apache.org\", role = \"aut\"), person(\"Matthieu\", \"Decorde\", , \"matthieu.decorde@ens-lyon.fr\", role = \"aut\"), person(\"Vaudor\", \"Lise\", , \"lise.vaudor@ens-lyon.fr\", role = \"aut\"), person(\"Tony\", \"Plate\", role = \"ctb\", comment = \"Early line dashing code\"), person(\"David\", \"Gohel\", role = \"ctb\", comment = \"Line dashing code and early raster code\"), person(\"Yixuan\", \"Qiu\", role = \"ctb\", comment = \"Improved styles; polypath implementation\"), person(\"Håkon\", \"Malmedal\", role = \"ctb\", comment = \"Opacity code\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = 
\"03wc8by49\")) )", @@ -7071,7 +6730,7 @@ "cli", "lifecycle", "rlang (>= 1.1.0)", - "systemfonts (>= 1.2.3)", + "systemfonts (>= 1.3.0)", "textshaping (>= 0.3.0)" ], "Suggests": [ @@ -7099,7 +6758,7 @@ "NeedsCompilation": "yes", "Author": "Hadley Wickham [aut], Lionel Henry [aut], Thomas Lin Pedersen [cre, aut] (ORCID: ), T Jake Luciani [aut], Matthieu Decorde [aut], Vaudor Lise [aut], Tony Plate [ctb] (Early line dashing code), David Gohel [ctb] (Line dashing code and early raster code), Yixuan Qiu [ctb] (Improved styles; polypath implementation), Håkon Malmedal [ctb] (Opacity code), Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Thomas Lin Pedersen ", - "Repository": "CRAN" + "Repository": "RSPM" }, "sys": { "Package": "sys", @@ -7127,7 +6786,7 @@ }, "systemfonts": { "Package": "systemfonts", - "Version": "1.2.3", + "Version": "1.3.1", "Source": "Repository", "Type": "Package", "Title": "System Native Font Finding", @@ -7150,9 +6809,12 @@ "Suggests": [ "covr", "farver", + "ggplot2", "graphics", "knitr", + "ragg", "rmarkdown", + "svglite", "testthat (>= 2.1.0)" ], "LinkingTo": [ @@ -7229,13 +6891,108 @@ "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, + "texreg": { + "Package": "texreg", + "Version": "1.39.4", + "Source": "Repository", + "Date": "2024-07-23", + "Title": "Conversion of R Regression Output to LaTeX or HTML Tables", + "Authors@R": "c(person(given = \"Philip\", family = \"Leifeld\", email = \"philip.leifeld@manchester.ac.uk\", role = c(\"aut\", \"cre\")), person(given = \"Claudia\", family = \"Zucca\", email = \"c.zucca@jads.nl\", role = \"ctb\"))", + "Description": "Converts coefficients, standard errors, significance stars, and goodness-of-fit statistics of statistical models into LaTeX tables or HTML tables/MS Word documents or to nicely formatted screen output for the R console for easy model comparison. A list of several models can be combined in a single table. The output is highly customizable. 
New model types can be easily implemented. Details can be found in Leifeld (2013), JStatSoft .)", + "URL": "https://github.com/leifeld/texreg/", + "BugReports": "https://github.com/leifeld/texreg/issues/", + "Suggests": [ + "broom (>= 0.4.2)", + "coda (>= 0.19.2)", + "ggplot2 (>= 3.1.0)", + "huxtable (>= 4.2.0)", + "knitr (>= 1.22)", + "rmarkdown (>= 1.12.3)", + "sandwich (>= 2.3-1)", + "systemfit (>= 1.1-0)", + "testthat (>= 2.0.0)", + "lmtest (>= 0.9-34)" + ], + "Depends": [ + "R (>= 3.5)" + ], + "Imports": [ + "methods", + "stats", + "httr" + ], + "Enhances": [ + "AER", + "alpaca", + "betareg", + "Bergm", + "bife", + "biglm", + "brglm", + "brms (>= 2.8.8)", + "btergm (>= 1.10.10)", + "dynlm", + "eha (>= 2.9.0)", + "ergm (>= 4.1.2)", + "feisr (>= 1.0.1)", + "fGarch", + "fixest (>= 0.10.5)", + "forecast", + "gamlss", + "gamlss.inf", + "gee", + "glmmTMB", + "gmm", + "gnm", + "h2o", + "latentnet", + "lfe", + "lme4 (>= 1.1.34)", + "logitr (>= 0.8.0)", + "lqmm", + "maxLik (>= 1.4.8)", + "metaSEM (>= 1.2.5.1)", + "mfx", + "mhurdle", + "miceadds", + "mlogit", + "MuMIn", + "nlme", + "nnet", + "oglmx", + "ordinal", + "pglm", + "plm (>= 2.4.1)", + "relevent", + "remify (>= 3.2.6)", + "remstats (>= 3.2.2)", + "remstimate (>= 2.3.11)", + "rms", + "robust", + "simex", + "spatialreg (>= 1.2.1)", + "spdep (>= 1.2.2)", + "speedglm", + "survival", + "truncreg (>= 0.2.5)", + "VGAM" + ], + "SystemRequirements": "pandoc (>= 1.12.3) suggested for using wordreg function; LaTeX packages tikz, booktabs, dcolumn, rotating, thumbpdf, longtable, paralist for the vignette", + "License": "GPL-3", + "Encoding": "UTF-8", + "RoxygenNote": "7.3.1", + "NeedsCompilation": "no", + "Author": "Philip Leifeld [aut, cre], Claudia Zucca [ctb]", + "Maintainer": "Philip Leifeld ", + "Repository": "CRAN" + }, "textshaping": { "Package": "textshaping", - "Version": "1.0.1", + "Version": "1.0.4", "Source": "Repository", "Title": "Bindings to the 'HarfBuzz' and 'Fribidi' Libraries for Text Shaping", 
"Authors@R": "c( person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", - "Description": "Provides access to the text shaping functionality in the 'HarfBuzz' library and the bidirectional algorithm in the 'Fribidi' library. 'textshaping' is a low-level utility package mainly for graphic devices that expands upon the font tool-set provided by the 'systemfonts' package.", + "Description": "Provides access to the text shaping functionality in the 'HarfBuzz' library and the bidirectional algorithm in the 'Fribidi' library. 'textshaping' is a low-level utility package mainly for graphic devices that expands upon the font tool-set provided by the 'systemfonts' package.", "License": "MIT + file LICENSE", "URL": "https://github.com/r-lib/textshaping", "BugReports": "https://github.com/r-lib/textshaping/issues", @@ -7246,7 +7003,7 @@ "lifecycle", "stats", "stringi", - "systemfonts (>= 1.1.0)", + "systemfonts (>= 1.3.0)", "utils" ], "Suggests": [ @@ -7359,7 +7116,7 @@ ], "Config/testthat/edition": "3", "NeedsCompilation": "no", - "Repository": "RSPM" + "Repository": "CRAN" }, "tidyr": { "Package": "tidyr", @@ -7691,7 +7448,7 @@ "VignetteBuilder": "knitr", "NeedsCompilation": "no", "Author": "Charlotte Baey [aut, cre] (), Estelle Kuhn [aut]", - "Repository": "CRAN" + "Repository": "RSPM" }, "vctrs": { "Package": "vctrs", @@ -7770,7 +7527,7 @@ }, "vroom": { "Package": "vroom", - "Version": "1.6.5", + "Version": "1.6.6", "Source": "Repository", "Title": "Read and Write Rectangular Text Data Quickly", "Authors@R": "c( person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"aut\", 
\"cre\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Shelby\", \"Bearrows\", role = \"ctb\"), person(\"https://github.com/mandreyel/\", role = \"cph\", comment = \"mio library\"), person(\"Jukka\", \"Jylänki\", role = \"cph\", comment = \"grisu3 implementation\"), person(\"Mikkel\", \"Jørgensen\", role = \"cph\", comment = \"grisu3 implementation\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", @@ -7822,7 +7579,7 @@ ], "LinkingTo": [ "cpp11 (>= 0.2.0)", - "progress (>= 1.2.1)", + "progress (>= 1.2.3)", "tzdb (>= 0.1.1)" ], "VignetteBuilder": "knitr", @@ -7832,9 +7589,9 @@ "Copyright": "file COPYRIGHTS", "Encoding": "UTF-8", "Language": "en-US", - "RoxygenNote": "7.2.3.9000", + "RoxygenNote": "7.3.3", "NeedsCompilation": "yes", - "Author": "Jim Hester [aut] (), Hadley Wickham [aut] (), Jennifer Bryan [aut, cre] (), Shelby Bearrows [ctb], https://github.com/mandreyel/ [cph] (mio library), Jukka Jylänki [cph] (grisu3 implementation), Mikkel Jørgensen [cph] (grisu3 implementation), Posit Software, PBC [cph, fnd]", + "Author": "Jim Hester [aut] (ORCID: ), Hadley Wickham [aut] (ORCID: ), Jennifer Bryan [aut, cre] (ORCID: ), Shelby Bearrows [ctb], https://github.com/mandreyel/ [cph] (mio library), Jukka Jylänki [cph] (grisu3 implementation), Mikkel Jørgensen [cph] (grisu3 implementation), Posit Software, PBC [cph, fnd]", "Maintainer": "Jennifer Bryan ", "Repository": "CRAN" }, @@ -7915,7 +7672,7 @@ }, "xfun": { "Package": "xfun", - "Version": "0.52", + "Version": "0.54", "Source": "Repository", "Type": "Package", "Title": "Supporting Functions for Packages Maintained by 'Yihui Xie'", @@ -7937,7 +7694,7 @@ "rstudioapi", "tinytex (>= 0.30)", "mime", - "litedown (>= 0.4)", + "litedown (>= 0.6)", "commonmark", "knitr (>= 1.50)", "remotes", @@ -7947,22 +7704,23 @@ "jsonlite", "magick", "yaml", + "data.table", "qs" ], "License": "MIT + file LICENSE", "URL": "https://github.com/yihui/xfun", "BugReports": "https://github.com/yihui/xfun/issues", 
"Encoding": "UTF-8", - "RoxygenNote": "7.3.2", + "RoxygenNote": "7.3.3", "VignetteBuilder": "litedown", "NeedsCompilation": "yes", - "Author": "Yihui Xie [aut, cre, cph] (, https://yihui.org), Wush Wu [ctb], Daijiang Li [ctb], Xianying Tan [ctb], Salim Brüggemann [ctb] (), Christophe Dervieux [ctb]", + "Author": "Yihui Xie [aut, cre, cph] (ORCID: , URL: https://yihui.org), Wush Wu [ctb], Daijiang Li [ctb], Xianying Tan [ctb], Salim Brüggemann [ctb] (ORCID: ), Christophe Dervieux [ctb]", "Maintainer": "Yihui Xie ", - "Repository": "RSPM" + "Repository": "CRAN" }, "xml2": { "Package": "xml2", - "Version": "1.3.8", + "Version": "1.4.1", "Source": "Repository", "Title": "Parse XML", "Authors@R": "c( person(\"Hadley\", \"Wickham\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\"), person(\"Jeroen\", \"Ooms\", email = \"jeroenooms@gmail.com\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(\"R Foundation\", role = \"ctb\", comment = \"Copy of R-project homepage cached as example\") )", @@ -7983,7 +7741,6 @@ "curl", "httr", "knitr", - "magrittr", "mockery", "rmarkdown", "testthat (>= 3.2.0)", @@ -7992,7 +7749,7 @@ "VignetteBuilder": "knitr", "Config/Needs/website": "tidyverse/tidytemplate", "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", + "RoxygenNote": "7.3.3", "SystemRequirements": "libxml2: libxml2-dev (deb), libxml2-devel (rpm)", "Collate": "'S4.R' 'as_list.R' 'xml_parse.R' 'as_xml_document.R' 'classes.R' 'format.R' 'import-standalone-obj-type.R' 'import-standalone-purrr.R' 'import-standalone-types-check.R' 'init.R' 'nodeset_apply.R' 'paths.R' 'utils.R' 'xml2-package.R' 'xml_attr.R' 'xml_children.R' 'xml_document.R' 'xml_find.R' 'xml_missing.R' 'xml_modify.R' 'xml_name.R' 'xml_namespaces.R' 'xml_node.R' 'xml_nodeset.R' 'xml_path.R' 'xml_schema.R' 'xml_serialize.R' 'xml_structure.R' 'xml_text.R' 'xml_type.R' 'xml_url.R' 'xml_write.R' 'zzz.R'", "Config/testthat/edition": "3", @@ -8060,7 +7817,7 @@ 
"NeedsCompilation": "yes", "Author": "Achim Zeileis [aut, cre] (), Gabor Grothendieck [aut], Jeffrey A. Ryan [aut], Joshua M. Ulrich [ctb], Felix Andrews [ctb]", "Maintainer": "Achim Zeileis ", - "Repository": "CRAN", + "Repository": "RSPM", "Encoding": "UTF-8" } } From 0b4a9613f0bcd9a8d9deb06ab8c8dac3f93f2b13 Mon Sep 17 00:00:00 2001 From: jepusto Date: Mon, 10 Nov 2025 22:37:53 -0600 Subject: [PATCH 10/10] Fixed a few typos. --- 072-presentation-of-results.Rmd | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/072-presentation-of-results.Rmd b/072-presentation-of-results.Rmd index 588e357..b30e37b 100644 --- a/072-presentation-of-results.Rmd +++ b/072-presentation-of-results.Rmd @@ -38,7 +38,7 @@ Good analysis will provide a clear understanding of how one or more of the simul In multi-factor simulations, the major challenge in analyzing simulation results is dealing with the multiplicity and dimensional nature of the results. For instance, in our cluster RCT simulation, we calculated performance metrics in each of `r prettyNum( nrow(sres) / 3, big.mark=",")` different simulation scenarios, which vary along several factors. For each scenario, we calculated a whole suite of performance measures (bias, SE, RMSE, coverage, ...), and we have these performance measures for each of three estimation methods under consideration. -We organizeed all these results as a table with `r prettyNum( nrow(sres), big.mark=",")` rows (three rows per simulation scenario, with each row corresponding to a specific method) and one column per performance metric. +We organized all these results as a table with `r prettyNum( nrow(sres), big.mark=",")` rows (three rows per simulation scenario, with each row corresponding to a specific method) and one column per performance metric. Navigating all of this can feel somewhat overwhelming. How do we understand trends in this complex, multi-factor data structure? 
@@ -379,7 +379,7 @@ The $x$-axis shows each of our five methods we are comparing. The boxplots are "holding" the other factors, and show the Type-I error rates for the different small-sample corrections across the covariates tested and degree of model misspecification. We add a line at the target 0.05 rejection rate to ease comparison. The reach of the boxes shows how some methods are more or less vulnerable to different types of misspecification. -Some estimators (e.g., $T^2_A$) are clearly hyper-conservitive, with very low rejection rates. +Some estimators (e.g., $T^2_A$) are clearly hyper-conservative, with very low rejection rates. Other methods (e.g., EDF), have a range of very high rejection rates when $m = 10$; the degree of rejection rate must depend on model mis-specification and number of covariates tested (the things in the boxes).