CRAN Package Check Results for Package mlexperiments

Last updated on 2025-12-26 11:52:14 CET.

Flavor                             Version    Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.0.8         11.98  286.03  298.01  OK
r-devel-linux-x86_64-debian-gcc    0.0.8          8.19  218.41  226.60  OK
r-devel-linux-x86_64-fedora-clang  0.0.8         21.00  467.24  488.24  OK
r-devel-linux-x86_64-fedora-gcc    0.0.8         21.00  627.98  648.98  OK
r-devel-windows-x86_64             0.0.8         13.00  406.00  419.00  ERROR
r-patched-linux-x86_64             0.0.8         10.73  271.95  282.68  OK
r-release-linux-x86_64             0.0.8         11.88  281.95  293.83  OK
r-release-macos-arm64              0.0.8                                OK
r-release-macos-x86_64             0.0.8          7.00  362.00  369.00  OK
r-release-windows-x86_64           0.0.8         14.00  407.00  421.00  OK
r-oldrel-macos-arm64               0.0.8                                OK
r-oldrel-macos-x86_64              0.0.8          7.00  377.00  384.00  OK
r-oldrel-windows-x86_64            0.0.8         20.00  571.00  591.00  OK

Tinstall, Tcheck, and Ttotal are given in seconds.

Check Details

Version: 0.0.8
Check: examples
Result: ERROR
  Running examples in 'mlexperiments-Ex.R' failed
  The error most likely occurred in:
    > ### Name: performance
    > ### Title: performance
    > ### Aliases: performance
    >
    > ### ** Examples
    >
    > dataset <- do.call(
    +   cbind,
    +   c(sapply(paste0("col", 1:6), function(x) {
    +     rnorm(n = 500)
    +   },
    +   USE.NAMES = TRUE,
    +   simplify = FALSE
    +   ),
    +   list(target = sample(0:1, 500, TRUE))
    + ))
    >
    > fold_list <- splitTools::create_folds(
    +   y = dataset[, 7],
    +   k = 3,
    +   type = "stratified",
    +   seed = 123
    + )
    >
    > glm_optimization <- mlexperiments::MLCrossValidation$new(
    +   learner = LearnerGlm$new(),
    +   fold_list = fold_list,
    +   seed = 123
    + )
    >
    > glm_optimization$learner_args <- list(family = binomial(link = "logit"))
    > glm_optimization$predict_args <- list(type = "response")
    > glm_optimization$performance_metric_args <- list(
    +   positive = "1",
    +   negative = "0"
    + )
    > glm_optimization$performance_metric <- list(
    +   auc = metric("AUC"), sensitivity = metric("TPR"),
    +   specificity = metric("TNR")
    + )
    > glm_optimization$return_models <- TRUE
    >
    > # set data
    > glm_optimization$set_data(
    +   x = data.matrix(dataset[, -7]),
    +   y = dataset[, 7]
    + )
    >
    > cv_results <- glm_optimization$execute()
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    >
    > # predictions
    > preds <- mlexperiments::predictions(
    +   object = glm_optimization,
    +   newdata = data.matrix(dataset[, -7]),
    +   na.rm = FALSE,
    +   ncores = 2L,
    +   type = "response"
    + )
    Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), :
      attempt access index 3/3 in VECTOR_ELT
    Calls: <Anonymous> -> [ -> [.data.table
    Execution halted
Flavor: r-devel-windows-x86_64
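The failed example stops inside mlexperiments::predictions(), in a data.table expression that, according to the error message and the test backtraces below, computes a row-wise mean and standard deviation over .SD with .SDcols = colnames(res) and by = seq_len(nrow(res)), and aborts on r-devel for Windows with "attempt access index N/N in VECTOR_ELT". The following is only a sketch, not the package's code: it mimics the reported pattern on toy data and, under that assumption, shows a row-wise alternative (rowMeans() plus apply()) that avoids grouping over .SD.

    # Minimal sketch on toy data (assumption: not taken from mlexperiments itself).
    library(data.table)

    res <- as.data.table(matrix(rnorm(15), nrow = 5L, ncol = 3L))
    na.rm <- FALSE

    # Pattern reported in the error/backtrace: per-row mean/sd via .SD,
    # grouped by seq_len(nrow(res)).
    # res[, `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    #            sd = stats::sd(as.numeric(.SD), na.rm = na.rm)),
    #     .SDcols = colnames(res), by = seq_len(nrow(res))]

    # Alternative row-wise computation that does not group over .SD;
    # copy() detaches the stored column names before new columns are added.
    pred_cols <- copy(colnames(res))
    res[, mean := rowMeans(.SD, na.rm = na.rm), .SDcols = pred_cols]
    res[, sd := apply(.SD, 1L, stats::sd, na.rm = na.rm), .SDcols = pred_cols]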

Version: 0.0.8
Check: tests
Result: ERROR
  Running 'testthat.R' [296s]
  Running the tests in 'tests/testthat.R' failed.
  Complete output:
    > # This file is part of the standard setup for testthat.
    > # It is recommended that you do not modify it.
    > #
    > # Where should you do additional test configuration?
    > # Learn more about the roles of various files in:
    > # * https://r-pkgs.org/tests.html
    > # * https://testthat.r-lib.org/reference/test_package.html#special-files
    >
    > Sys.setenv("OMP_THREAD_LIMIT" = 2)
    > Sys.setenv("Ncpu" = 2)
    >
    > library(testthat)
    > library(mlexperiments)
    >
    > test_check("mlexperiments")
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold4
    CV fold: Fold5
    Testing for identical folds in 2 and 1.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerGlm'.
    Saving _problems/test-glm_predictions-79.R
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold4
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold5
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    Saving _problems/test-glm_predictions-188.R
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 22.5 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.64 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 23.36 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.64 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    Registering parallel backend using 2 cores.
    Running initial scoring function 4 times in 2 thread(s)... 8.95 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.52 seconds
    3) Running FUN 2 times in 2 thread(s)... 3.55 seconds
    CV fold: Fold1
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 8.58 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.57 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold2
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 8.45 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.56 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold3
    Registering parallel backend using 2 cores.
    Running initial scoring function 11 times in 2 thread(s)... 8.23 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.58 seconds
    Noise could not be added to find unique parameter set. Stopping process and returning results so far.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold2
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold3
    Parameter 'ncores' is ignored for learner 'LearnerLm'.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 18.5 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.45 seconds
    3) Running FUN 2 times in 2 thread(s)... 3.5 seconds
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.56 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.47 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.36 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 8.78 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.46 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.46 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 9.33 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.47 seconds
    3) Running FUN 2 times in 2 thread(s)... 1.5 seconds
    CV fold: Fold1
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold2
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold3
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    Classification: using 'mean misclassification error' as optimization metric.
    CV fold: Fold1
    CV fold: Fold2
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 2.76 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.49 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.18 seconds
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold1
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 2.75 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.45 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.24 seconds
    CV fold: Fold2
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 2.79 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.42 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.19 seconds
    CV fold: Fold3
    Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
    ... reducing initialization grid to 10 rows.
    Registering parallel backend using 2 cores.
    Running initial scoring function 10 times in 2 thread(s)... 2.78 seconds
    Starting Epoch 1
    1) Fitting Gaussian Process...
    2) Running local optimum search... 0.46 seconds
    3) Running FUN 2 times in 2 thread(s)... 0.19 seconds
    CV fold: Fold1
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold2
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    CV fold: Fold3
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.
    Regression: using 'mean squared error' as optimization metric.

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

    ══ Skipped tests (1) ═══════════════════════════════════════════════════════════
    • On CRAN (1): 'test-lints.R:10:5'

    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
     2.   ├─...[]
     3.   └─data.table:::`[.data.table`(...)
    ── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
    Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
        sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
        by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
    Backtrace:
        ▆
     1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
     2.   ├─...[]
     3.   └─data.table:::`[.data.table`(...)

    [ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
    Error: ! Test failures.
    Execution halted
Flavor: r-devel-windows-x86_64
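Both failed tests hit the same predictions() expression as the failed example above. For local debugging, the two failing tests can be re-run in isolation; the sketch below assumes a source checkout of mlexperiments with testthat installed, and that the filter value "glm_predictions" matches tests/testthat/test-glm_predictions.R as suggested by the backtraces.

    # Mirror the thread limits set in tests/testthat.R, then run only the
    # glm/lm prediction tests from the package source directory.
    Sys.setenv("OMP_THREAD_LIMIT" = 2, "Ncpu" = 2)
    testthat::test_local(filter = "glm_predictions")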