| Title: | Unit Testing for R |
| Version: | 3.3.0 |
| Description: | Software testing is important, but, in part because it is frustrating and boring, many of us avoid it. 'testthat' is a testing framework for R that is easy to learn and use, and integrates with your existing 'workflow'. |
| License: | MIT + file LICENSE |
| URL: | https://testthat.r-lib.org, https://github.com/r-lib/testthat |
| BugReports: | https://github.com/r-lib/testthat/issues |
| Depends: | R (≥ 4.1.0) |
| Imports: | brio (≥ 1.1.5), callr (≥ 3.7.6), cli (≥ 3.6.5), desc (≥ 1.4.3), evaluate (≥ 1.0.4), jsonlite (≥ 2.0.0), lifecycle (≥ 1.0.4), magrittr (≥ 2.0.3), methods, pkgload (≥ 1.4.0), praise (≥ 1.0.0), processx (≥ 3.8.6), ps (≥ 1.9.1), R6 (≥ 2.6.1), rlang (≥ 1.1.6), utils, waldo (≥ 0.6.2), withr (≥ 3.0.2) |
| Suggests: | covr, curl (≥ 0.9.5), diffviewer (≥ 0.1.0), digest (≥ 0.6.33), gh, knitr, rmarkdown, rstudioapi, S7, shiny, usethis, vctrs (≥ 0.1.0), xml2 |
| VignetteBuilder: | knitr |
| Config/Needs/website: | tidyverse/tidytemplate |
| Config/testthat/edition: | 3 |
| Config/testthat/parallel: | true |
| Config/testthat/start-first: | watcher, parallel* |
| Encoding: | UTF-8 |
| RoxygenNote: | 7.3.3 |
| NeedsCompilation: | yes |
| Packaged: | 2025-11-11 14:12:42 UTC; hadleywickham |
| Author: | Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd], R Core team [ctb] (Implementation of utils::recover()) |
| Maintainer: | Hadley Wickham <hadley@posit.co> |
| Repository: | CRAN |
| Date/Publication: | 2025-11-13 14:50:02 UTC |
An R package to make testing fun!
Description
Try the example below. Have a look at the references and learn more
from function documentation such as test_that().
Options
- testthat.use_colours: Should the output be coloured? (Default: TRUE.)
- testthat.summary.max_reports: The maximum number of detailed test reports printed for the summary reporter (default: 10).
- testthat.summary.omit_dots: Omit progress dots in the summary reporter (default: FALSE).
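These options can be set with options(), e.g. in your .Rprofile; a minimal sketch:

```r
# Configure testthat output (e.g. in your .Rprofile).
options(
  testthat.use_colours = FALSE,       # plain, uncoloured output
  testthat.summary.max_reports = 5,   # fewer detailed reports from the summary reporter
  testthat.summary.omit_dots = TRUE   # hide progress dots
)
getOption("testthat.use_colours")
#> [1] FALSE
```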
Author(s)
Maintainer: Hadley Wickham hadley@posit.co
Other contributors:
Posit Software, PBC [copyright holder, funder]
R Core team (Implementation of utils::recover()) [contributor]
See Also
Useful links:
Report bugs at https://github.com/r-lib/testthat/issues
Report results for R CMD check
Description
R CMD check displays only the last 13 lines of the result, so this
report is designed to ensure that you see something useful there.
See Also
Other reporters:
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Interactively debug failing tests
Description
This reporter will call a modified version of recover() on all
broken expectations.
See Also
Other reporters:
CheckReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Fail if any tests fail
Description
This reporter will simply throw an error if any of the tests failed. It is best combined with another reporter, such as the SummaryReporter.
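For example, a run that prints a summary but still errors if anything failed; this sketch assumes that a character vector of reporter names is combined into a MultiReporter, as testthat's reporter lookup does:

```r
library(testthat)

# Print a summary of the run, and additionally throw an error at the end
# if any test failed -- handy in scripts or CI pipelines.
path <- testthat_example("success")  # a passing example file shipped with testthat
test_file(path, reporter = c("summary", "fail"))
```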
See Also
Other reporters:
CheckReporter,
DebugReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Report results in jUnit XML format
Description
This reporter includes detailed results about each test and summaries, written to a file (or stdout) in jUnit XML format. This can be read by the Jenkins Continuous Integration System to report on a dashboard etc. Requires the xml2 package.
To fit into the jUnit structure, context() becomes the <testsuite>
name as well as the base of the <testcase> classname. The
test_that() name becomes the rest of the <testcase> classname.
The deparsed expect_that() call becomes the <testcase> name.
On failure, the message goes into the <failure> node message
argument (first line only) and into its text content (full message).
Execution time and some other details are also recorded.
References for the jUnit XML format: https://github.com/testmoapp/junitxml
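A sketch of producing a jUnit XML file for a CI server; this assumes the reporter's file argument, inherited from Reporter, redirects the output, and requires the xml2 package to be installed:

```r
library(testthat)

# Write results for the bundled example tests to junit-results.xml,
# which CI systems such as Jenkins can ingest.
path <- testthat_example("success")
test_file(path, reporter = JunitReporter$new(file = "junit-results.xml"))
```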
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Capture test results and metadata
Description
This reporter gathers all results, adding additional information such as test elapsed time, and test filename if available. Very useful for reporting.
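A sketch of collecting structured results for further processing (get_results() is the accessor assumed here):

```r
library(testthat)

reporter <- ListReporter$new()
test_file(testthat_example("success"), reporter = reporter)

# One entry per test: file, test name, pass/fail counts, elapsed time.
results <- reporter$get_results()
as.data.frame(results)
```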
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Test reporter: location
Description
This reporter simply prints the location of every expectation and error. This is useful if you're trying to figure out the source of a segfault, or which code triggers a C/C++ breakpoint.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Report minimal results as compactly as possible
Description
The minimal test reporter provides the absolutely minimum amount of information: whether each expectation has succeeded, failed or experienced an error. If you want to find out what the failures and errors actually were, you'll need to run a more informative test reporter.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Run multiple reporters at the same time
Description
This reporter is useful for running several reporters at the same time, e.g. adding a custom reporter without removing the current one.
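A sketch of combining reporters explicitly (the reporters field name is assumed from the R6 interface):

```r
library(testthat)

# Show interactive progress *and* capture structured results in one run.
listed <- ListReporter$new()
multi  <- MultiReporter$new(reporters = list(ProgressReporter$new(), listed))

test_file(testthat_example("success"), reporter = multi)
listed$get_results()
```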
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Report progress interactively
Description
ProgressReporter is designed for interactive use. Its goal is to
give you actionable insights to help you understand the status of your
code. This reporter also praises you from time to time if all your tests
pass. It's the default reporter for test_dir().
ParallelProgressReporter is very similar to ProgressReporter, but
works better for packages that want parallel tests.
CompactProgressReporter is a minimal version of ProgressReporter
designed for use with single files. It's the default reporter for
test_file().
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Report results to RStudio
Description
This reporter is designed for output to RStudio. It produces results in an easily parsed form.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Manage test reporting
Description
The job of a reporter is to aggregate the results from files, tests, and
expectations and display them in an informative way. Every testthat function
that runs multiple tests provides a reporter argument which you can
use to override the default (which is selected by default_reporter()).
Details
You only need to use this Reporter object directly if you are creating
a new reporter. Currently, creating new Reporters is undocumented,
so if you want to create your own, you'll need to make sure that you're
familiar with R6 and then read the
source code for a few existing reporters.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Examples
path <- testthat_example("success")
test_file(path)
# Override the default by supplying the name of a reporter
test_file(path, reporter = "minimal")
Silently collect all expectations
Description
This reporter quietly runs all tests, simply gathering all expectations.
This is helpful for programmatically inspecting errors after a test run.
You can retrieve the results with $expectations().
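A minimal sketch of inspecting expectations after a quiet run:

```r
library(testthat)

reporter <- SilentReporter$new()
test_file(testthat_example("failure"), reporter = reporter)

# Examine the gathered expectation objects programmatically.
exps <- reporter$expectations()
sapply(exps, function(e) class(e)[[1]])
```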
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Find slow tests
Description
SlowReporter is designed to identify slow tests. It reports the
execution time for each test, ignoring tests faster than a specified
threshold (default: 0.5s). This is useful for identifying tests that may
benefit from optimization or parallelization.
The easiest way to run it over your package is with
devtools::test(reporter = "slow").
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
StopReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Error if any test fails
Description
The default reporter used when expect_that() is run interactively.
It responds by displaying a summary of the number of successes and failures
and stop()ping if there are any failures.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
SummaryReporter,
TapReporter,
TeamcityReporter
Report a summary of failures
Description
This is designed for interactive usage: it lets you know which tests have run successfully, as well as fully reporting information about failures and errors.
You can use the max_reports field to control the maximum number
of detailed reports produced by this reporter.
As an additional benefit, this reporter will praise you from time to time if all your tests pass.
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
TapReporter,
TeamcityReporter
Report results in TAP format
Description
This reporter will output results in the Test Anything Protocol (TAP), a simple text-based interface between testing modules in a test harness. For more information about TAP, see http://testanything.org
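For example, to emit TAP on standard output (e.g. for a TAP-consuming harness such as prove):

```r
library(testthat)

# Run the bundled example tests, printing results in TAP format.
test_file(testthat_example("success"), reporter = "tap")
```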
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TeamcityReporter
Report results in Teamcity format
Description
This reporter will output results in the Teamcity message format. For more information about Teamcity messages, see http://confluence.jetbrains.com/display/TCD7/Build+Script+Interaction+with+TeamCity
See Also
Other reporters:
CheckReporter,
DebugReporter,
FailReporter,
JunitReporter,
ListReporter,
LocationReporter,
MinimalReporter,
MultiReporter,
ProgressReporter,
RStudioReporter,
Reporter,
SilentReporter,
SlowReporter,
StopReporter,
SummaryReporter,
TapReporter
Watches code and tests for changes, rerunning tests as appropriate.
Description
The idea behind auto_test() is that you just leave it running while
you develop your code. Every time you save a file it will be automatically
tested and you can easily see if your changes have caused any test
failures.
The current strategy for rerunning tests is as follows:
- if any code has changed, those files are reloaded and all tests are rerun
- otherwise, each new or modified test is run
Usage
auto_test(
code_path,
test_path,
reporter = default_reporter(),
env = test_env(),
hash = TRUE
)
auto_test_package(pkg = ".", reporter = default_reporter(), hash = TRUE)
Arguments
code_path |
path to directory containing code |
test_path |
path to directory containing tests |
reporter |
test reporter to use |
env |
environment in which to execute test suite. |
hash |
Passed on to |
pkg |
path to package |
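A hedged sketch of typical usage; both calls block and keep watching until you interrupt them, so they are shown commented out:

```r
library(testthat)

# Watch a plain code/tests layout:
# auto_test(code_path = "R", test_path = "tests/testthat")

# Watch an entire package, using timestamps instead of hashes
# (faster, but less accurate at detecting changes):
# auto_test_package(pkg = ".", hash = FALSE)
```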
See Also
Capture conditions, including messages, warnings, expectations, and errors.
Description
These functions allow you to capture the side-effects of a function call
including printed output, messages, and warnings. We no longer recommend
these functions; instead rely on expect_message()
and friends to bubble up unmatched conditions. If you just want to silence
unimportant warnings, use suppressWarnings().
Usage
capture_condition(code, entrace = FALSE)
capture_error(code, entrace = FALSE)
capture_expectation(code, entrace = FALSE)
capture_message(code, entrace = FALSE)
capture_warning(code, entrace = FALSE)
capture_messages(code)
capture_warnings(code, ignore_deprecation = FALSE)
Arguments
code |
Code to evaluate |
entrace |
Whether to add a backtrace to the captured condition. |
Value
Singular functions (capture_condition(), capture_expectation(), etc.)
return a condition object. capture_messages() and capture_warnings()
return a character vector of message text.
Examples
f <- function() {
message("First")
warning("Second")
message("Third")
}
capture_message(f())
capture_messages(f())
capture_warning(f())
capture_warnings(f())
# Condition will capture anything
capture_condition(f())
Capture output to console
Description
Evaluates code in a special context in which all output is captured,
similar to capture.output().
Usage
capture_output(code, print = FALSE, width = 80)
capture_output_lines(code, print = FALSE, width = 80)
testthat_print(x)
Arguments
code |
Code to evaluate. |
print |
If |
width |
Number of characters per line of output. This does not
inherit from |
Details
Results are printed using the testthat_print() generic, which defaults
to print(), giving you the ability to customise the printing of your
object in tests, if needed.
Value
capture_output() returns a single string. capture_output_lines()
returns a character vector with one entry for each line
Examples
capture_output({
cat("Hi!\n")
cat("Bye\n")
})
capture_output_lines({
cat("Hi!\n")
cat("Bye\n")
})
capture_output("Hi")
capture_output("Hi", print = TRUE)
Provide human-readable comparison of two objects
Description
compare is similar to base::all.equal(), but somewhat buggy in its
use of tolerance. Please use waldo instead.
Usage
compare(x, y, ...)
## Default S3 method:
compare(x, y, ..., max_diffs = 9)
## S3 method for class 'character'
compare(
x,
y,
check.attributes = TRUE,
...,
max_diffs = 5,
max_lines = 5,
width = cli::console_width()
)
## S3 method for class 'numeric'
compare(
x,
y,
tolerance = testthat_tolerance(),
check.attributes = TRUE,
...,
max_diffs = 9
)
## S3 method for class 'POSIXt'
compare(x, y, tolerance = 0.001, ..., max_diffs = 9)
Arguments
x, y |
Objects to compare |
... |
Additional arguments used to control specifics of comparison |
max_diffs |
Maximum number of differences to show |
check.attributes |
If |
max_lines |
Maximum number of lines to show from each difference |
width |
Width of output device |
tolerance |
Numerical tolerance: any differences (in the sense of
The default tolerance is |
Examples
# Character -----------------------------------------------------------------
x <- c("abc", "def", "jih")
compare(x, x)
y <- paste0(x, "y")
compare(x, y)
compare(letters, paste0(letters, "-"))
x <- "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis cursus
tincidunt auctor. Vestibulum ac metus bibendum, facilisis nisi non, pulvinar
dolor. Donec pretium iaculis nulla, ut interdum sapien ultricies a. "
y <- "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis cursus
tincidunt auctor. Vestibulum ac metus1 bibendum, facilisis nisi non, pulvinar
dolor. Donec pretium iaculis nulla, ut interdum sapien ultricies a. "
compare(x, y)
compare(c(x, x), c(y, y))
# Numeric -------------------------------------------------------------------
x <- y <- runif(100)
y[sample(100, 10)] <- 5
compare(x, y)
x <- y <- 1:10
x[5] <- NA
x[6] <- 6.5
compare(x, y)
# Compare ignores minor numeric differences in the same way
# as all.equal.
compare(x, x + 1e-9)
Compare two directory states.
Description
Compare two directory states.
Usage
compare_state(old, new)
Arguments
old |
previous state |
new |
current state |
Value
list containing number of changes and files which have been
added, deleted and modified
Do you expect a value bigger or smaller than this?
Description
These functions compare values of comparable data types, such as numbers, dates, and times.
Usage
expect_lt(object, expected, label = NULL, expected.label = NULL)
expect_lte(object, expected, label = NULL, expected.label = NULL)
expect_gt(object, expected, label = NULL, expected.label = NULL)
expect_gte(object, expected, label = NULL, expected.label = NULL)
Arguments
object, expected |
A value to compare and its expected bound. |
label, expected.label |
Used to customise failure messages. For expert use only. |
See Also
Other expectations:
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
a <- 9
expect_lt(a, 10)
## Not run:
expect_lt(11, 10)
## End(Not run)
a <- 11
expect_gt(a, 10)
## Not run:
expect_gt(9, 10)
## End(Not run)
Describe the context of a set of tests.
Description
Use of context() is no longer recommended. Instead omit it, and messages
will use the name of the file instead. This ensures that the context and
test file name are always in sync.
A context defines a set of tests that test related functionality. Usually you will have one context per file, but you may have multiple contexts in a single file if you so choose.
Usage
context(desc)
Arguments
desc |
description of context. Should start with a capital letter. |
3rd edition
context() is deprecated in the third edition, and the equivalent
information is instead recorded by the test file name.
Examples
context("String processing")
context("Remote procedure calls")
Start test context from a file name
Description
For use in external reporters
Usage
context_start_file(name)
Arguments
name |
file name |
Retrieve the default reporter
Description
The defaults are:
- ProgressReporter for interactive, non-parallel use; override with testthat.default_reporter
- ParallelProgressReporter for interactive, parallel packages; override with testthat.default_parallel_reporter
- CompactProgressReporter for single-file interactive use; override with testthat.default_compact_reporter
- CheckReporter for R CMD check; override with testthat.default_check_reporter
Usage
default_reporter()
default_parallel_reporter()
default_compact_reporter()
check_reporter()
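For example, you can inspect which reporter would be selected in the current session:

```r
library(testthat)

default_reporter()  # typically "progress" in an interactive session
check_reporter()    # the reporter used under R CMD check
```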
describe: a BDD testing language
Description
A simple behavior-driven development (BDD) domain-specific language for writing tests. The language is similar to RSpec for Ruby or Mocha for JavaScript. BDD tests read like sentences and it should thus be easier to understand what the specification of a function/component is.
Usage
describe(description, code)
it(description, code = NULL)
Arguments
description |
description of the feature |
code |
test code containing the specs |
Details
Tests using the describe syntax not only verify the tested code, but
also document its intended behaviour. Each describe block specifies a
larger component or function and contains a set of specifications. A
specification is defined by an it block. Each it block
functions as a test and is evaluated in its own environment. You
can also have nested describe blocks.
This test syntax helps to test the intended behaviour of your code. For
example: you want to write a new function for your package. Try to describe
the specification first using describe, before you write any code.
After that, you start to implement the tests for each specification (i.e.
the it block).
Use describe to verify that you implement the right things and use
test_that() to ensure you do the things right.
Examples
describe("matrix()", {
it("can be multiplied by a scalar", {
m1 <- matrix(1:4, 2, 2)
m2 <- m1 * 2
expect_equal(matrix(1:4 * 2, 2, 2), m2)
})
it("can have not yet tested specs")
})
# Nested specs:
## code
addition <- function(a, b) a + b
division <- function(a, b) a / b
## specs
describe("math library", {
describe("addition()", {
it("can add two numbers", {
expect_equal(1 + 1, addition(1, 1))
})
})
describe("division()", {
it("can divide two numbers", {
expect_equal(10 / 2, division(10, 2))
})
it("can handle division by 0") #not yet implemented
})
})
Capture the state of a directory.
Description
Capture the state of a directory.
Usage
dir_state(path, pattern = NULL, hash = TRUE)
Arguments
path |
path to directory |
pattern |
regular expression with which to filter files |
hash |
use hash (slow but accurate) or time stamp (fast but less accurate) |
Do you expect this value?
Description
These functions provide two levels of strictness when comparing a
computation to a reference value. expect_identical() is the baseline;
expect_equal() relaxes the test to ignore small numeric differences.
In the 2nd edition, expect_identical() uses identical() and
expect_equal uses all.equal(). In the 3rd edition, both functions use
waldo. They differ only in that
expect_equal() sets tolerance = testthat_tolerance() so that small
floating point differences are ignored; this also implies that (e.g.) 1
and 1L are treated as equal.
Usage
expect_equal(
object,
expected,
...,
tolerance = if (edition_get() >= 3) testthat_tolerance(),
info = NULL,
label = NULL,
expected.label = NULL
)
expect_identical(
object,
expected,
info = NULL,
label = NULL,
expected.label = NULL,
...
)
Arguments
object, expected |
Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
... |
3e: passed on to 2e: passed on to |
tolerance |
3e: passed on to 2e: passed on to |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label, expected.label |
Used to customise failure messages. For expert use only. |
See Also
- expect_setequal()/expect_mapequal() to test for set equality.
- expect_reference() to test if two names point to the same memory address.
Other expectations:
comparison-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
a <- 10
expect_equal(a, 10)
# Use expect_equal() when testing for numeric equality
## Not run:
expect_identical(sqrt(2) ^ 2, 2)
## End(Not run)
expect_equal(sqrt(2) ^ 2, 2)
Evaluate a promise, capturing all types of output.
Description
Evaluate a promise, capturing all types of output.
Usage
evaluate_promise(code, print = FALSE)
Arguments
code |
Code to evaluate. |
Value
A list containing
result |
The result of the function |
output |
A string containing all the output from the function |
warnings |
A character vector containing the text from each warning |
messages |
A character vector containing the text from each message |
Examples
evaluate_promise({
print("1")
message("2")
warning("3")
4
})
The previous building block of all expect_ functions
Description
Previously, we recommended using expect() when writing your own
expectations. Now we instead recommend pass() and fail(). See
vignette("custom-expectation") for details.
Usage
expect(
ok,
failure_message,
info = NULL,
srcref = NULL,
trace = NULL,
trace_env = caller_env()
)
Arguments
ok |
|
failure_message |
A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen. |
info |
Character vector containing additional information. Included for backward compatibility only; new expectations should not use it. |
srcref |
Location of the failure. Should only need to be explicitly supplied when you need to forward a srcref captured elsewhere. |
trace |
An optional backtrace created by |
trace_env |
If |
Value
An expectation object from either succeed() or fail(), signalled
with a muffle_expectation restart.
See Also
Do you expect every value in a vector to have this value?
Description
These expectations are similar to expect_true(all(x == "x")),
expect_true(all(x)) and expect_true(all(!x)) but give more informative
failure messages if the expectations are not met.
Usage
expect_all_equal(object, expected)
expect_all_true(object)
expect_all_false(object)
Arguments
object, expected |
Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
Examples
x1 <- c(1, 1, 1, 1, 1, 1)
expect_all_equal(x1, 1)
x2 <- c(1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2)
show_failure(expect_all_equal(x2, 1))
# expect_all_true() and expect_all_false() are helpers for common cases
set.seed(1016)
show_failure(expect_all_true(rpois(100, 10) < 20))
show_failure(expect_all_false(rpois(100, 10) > 20))
Do C++ tests pass?
Description
Test compiled code in the package. A call to this function will
automatically be generated for you in tests/testthat/test-cpp.R after
calling use_catch(); you should not need to manually call this expectation
yourself.
Usage
expect_cpp_tests_pass(package)
run_cpp_tests(package)
Arguments
package |
The name of the package to test. |
Is an object equal to the expected value, ignoring attributes?
Description
Compares object and expected using all.equal() and
check.attributes = FALSE.
Usage
expect_equivalent(
object,
expected,
...,
info = NULL,
label = NULL,
expected.label = NULL
)
Arguments
object, expected |
Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
... |
Passed on to |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label, expected.label |
Used to customise failure messages. For expert use only. |
3rd edition
expect_equivalent() is deprecated in the 3rd edition. Instead use
expect_equal(ignore_attr = TRUE).
Examples
# expect_equivalent() ignores attributes
a <- b <- 1:3
names(b) <- letters[1:3]
## Not run:
expect_equal(a, b)
## End(Not run)
expect_equivalent(a, b)
Do you expect an error, warning, message, or other condition?
Description
expect_error(), expect_warning(), expect_message(), and
expect_condition() check that code throws an error, warning, message,
or condition with a message that matches regexp, or a class that inherits
from class. See below for more details.
In the 3rd edition, these functions match (at most) a single condition. All
additional and non-matching (if regexp or class are used) conditions
will bubble up outside the expectation. If these additional conditions
are important you'll need to catch them with additional
expect_message()/expect_warning() calls; if they're unimportant you
can ignore with suppressMessages()/suppressWarnings().
It can be tricky to test for a combination of different conditions,
such as a message followed by an error. expect_snapshot() is
often an easier alternative for these more complex cases.
Usage
expect_error(
object,
regexp = NULL,
class = NULL,
...,
inherit = TRUE,
info = NULL,
label = NULL
)
expect_warning(
object,
regexp = NULL,
class = NULL,
...,
inherit = TRUE,
all = FALSE,
info = NULL,
label = NULL
)
expect_message(
object,
regexp = NULL,
class = NULL,
...,
inherit = TRUE,
all = FALSE,
info = NULL,
label = NULL
)
expect_condition(
object,
regexp = NULL,
class = NULL,
...,
inherit = TRUE,
info = NULL,
label = NULL
)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp |
Regular expression to test against.
Note that you should only use |
class |
Instead of supplying a regular expression, you can also supply a class name. This is useful for "classed" conditions. |
... |
Arguments passed on to
|
inherit |
Whether to match |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
all |
DEPRECATED If you need to test multiple warnings/messages
you now need to use multiple calls to |
Value
If regexp = NA, the value of the first argument; otherwise
the captured condition.
Testing message vs class
When checking that code generates an error, it's important to check that the
error is the one you expect. There are two ways to do this. The first
way is the simplest: you just provide a regexp that matches some fragment
of the error message. This is easy, but fragile, because the test will
fail if the error message changes (even if it's the same error).
A more robust way is to test for the class of the error, if it has one.
You can learn more about custom conditions at
https://adv-r.hadley.nz/conditions.html#custom-conditions, but in
short, errors are S3 classes and you can generate a custom class and check
for it using class instead of regexp.
If you are using expect_error() to check that an error message is
formatted in such a way that it makes sense to a human, we recommend
using expect_snapshot() instead.
See Also
expect_no_error(), expect_no_warning(),
expect_no_message(), and expect_no_condition() to assert
that code runs without errors/warnings/messages/conditions.
Other expectations:
comparison-expectations,
equality-expectations,
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
# Errors ------------------------------------------------------------------
f <- function() stop("My error!")
expect_error(f())
expect_error(f(), "My error!")
# You can use the arguments of grepl to control the matching
expect_error(f(), "my error!", ignore.case = TRUE)
# Note that `expect_error()` returns the error object so you can test
# its components if needed
err <- expect_error(rlang::abort("a", n = 10))
expect_equal(err$n, 10)
# Warnings ------------------------------------------------------------------
f <- function(x) {
if (x < 0) {
warning("*x* is already negative")
return(x)
}
-x
}
expect_warning(f(-1))
expect_warning(f(-1), "already negative")
expect_warning(f(1), NA)
# To test message and output, store results to a variable
expect_warning(out <- f(-1), "already negative")
expect_equal(out, -1)
# Messages ------------------------------------------------------------------
f <- function(x) {
if (x < 0) {
message("*x* is already negative")
return(x)
}
-x
}
expect_message(f(-1))
expect_message(f(-1), "already negative")
expect_message(f(1), NA)
Do you expect the result to be (in)visible?
Description
Use this to test whether a function returns a visible or invisible output. Typically you'll use this to check that functions called primarily for their side-effects return their data argument invisibly.
Usage
expect_invisible(call, label = NULL)
expect_visible(call, label = NULL)
Arguments
call |
A function call. |
label |
Used to customise failure messages. For expert use only. |
Value
The evaluated call, invisibly.
Examples
expect_invisible(x <- 10)
expect_visible(x)
# Typically you'll assign the result of the expectation so you can
# also check that the value is as you expect.
greet <- function(name) {
message("Hi ", name)
invisible(name)
}
out <- expect_invisible(greet("Hadley"))
expect_equal(out, "Hadley")
Do you expect to inherit from this class?
Description
expect_is() is an older form that uses inherits() without checking
whether x is S3, S4, or neither. Instead, I'd recommend using
expect_type(), expect_s3_class(), or expect_s4_class() to more clearly
convey your intent.
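A quick sketch of the recommended, more specific alternatives:

```r
library(testthat)

expect_type(1:10, "integer")           # checks the base type, i.e. typeof()
expect_s3_class(mtcars, "data.frame")  # checks the S3 class
# expect_s4_class() works analogously for S4 objects
```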
Usage
expect_is(object, class, info = NULL, label = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
class |
Class name passed to |
3rd edition
expect_is() is formally deprecated in the 3rd edition.
Do you expect the results/output to equal a known value?
Description
For complex printed output and objects, it is often challenging to describe
exactly what you expect to see. expect_known_value() and
expect_known_output() provide a slightly weaker guarantee, simply
asserting that the values have not changed since the last time that you ran
them.
Usage
expect_known_output(
object,
file,
update = TRUE,
...,
info = NULL,
label = NULL,
print = FALSE,
width = 80
)
expect_known_value(
object,
file,
update = TRUE,
...,
info = NULL,
label = NULL,
version = 2
)
expect_known_hash(object, hash = NULL)
Arguments
file |
File path where known value/output will be stored. |
update |
Should the file be updated? Defaults to |
... |
Passed on to |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
print |
If |
width |
Number of characters per line of output. This does not
inherit from |
version |
The serialization format version to use. The default, 2, was the default format from R 1.4.0 to 3.5.3. Version 3 became the default from R 3.6.0 and can only be read by R versions 3.5.0 and higher. |
hash |
Known hash value. Leave empty and you'll be informed what to use in the test output. |
Details
These expectations should be used in conjunction with git, as otherwise
there is no way to revert to previous values. Git is particularly useful
in conjunction with expect_known_output() as the diffs will show you
exactly what has changed.
Note that known values will only be updated when running tests
interactively. R CMD check clones the package source so any changes to
the reference files will occur in a temporary directory, and will not be
synchronised back to the source package.
3rd edition
expect_known_output() and friends are deprecated in the 3rd edition;
please use expect_snapshot_output() and friends instead.
Examples
tmp <- tempfile()
# The first run always succeeds
expect_known_output(mtcars[1:10, ], tmp, print = TRUE)
# Subsequent runs will succeed only if the file is unchanged
# This will succeed:
expect_known_output(mtcars[1:10, ], tmp, print = TRUE)
## Not run:
# This will fail
expect_known_output(mtcars[1:9, ], tmp, print = TRUE)
## End(Not run)
Do you expect an object with this length or shape?
Description
expect_length() inspects the length() of an object; expect_shape()
inspects the "shape" (i.e. nrow(), ncol(), or dim()) of
higher-dimensional objects like data.frames, matrices, and arrays.
Usage
expect_length(object, n)
expect_shape(object, ..., nrow, ncol, dim)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
n |
Expected length. |
... |
Not used; used to force naming of other arguments. |
nrow, ncol |
|
dim |
Expected |
See Also
expect_vector() to make assertions about the "size" of a vector.
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
expect_length(1, 1)
expect_length(1:10, 10)
show_failure(expect_length(1:10, 1))
x <- matrix(1:9, nrow = 3)
expect_shape(x, nrow = 3)
show_failure(expect_shape(x, nrow = 4))
expect_shape(x, ncol = 3)
show_failure(expect_shape(x, ncol = 4))
expect_shape(x, dim = c(3, 3))
show_failure(expect_shape(x, dim = c(3, 4, 5)))
Deprecated numeric comparison functions
Description
These functions have been deprecated in favour of the more concise
expect_gt() and expect_lt().
Usage
expect_less_than(...)
expect_more_than(...)
Arguments
... |
All arguments passed on to |
Do you expect a string to match this pattern?
Description
Do you expect a string to match this pattern?
Usage
expect_match(
object,
regexp,
perl = FALSE,
fixed = FALSE,
...,
all = TRUE,
info = NULL,
label = NULL
)
expect_no_match(
object,
regexp,
perl = FALSE,
fixed = FALSE,
...,
all = TRUE,
info = NULL,
label = NULL
)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp |
Regular expression to test against. |
perl |
logical. Should Perl-compatible regexps be used? |
fixed |
If |
... |
Arguments passed on to
|
all |
Should all elements of actual value match |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
Details
expect_match() checks if a character vector matches a regular expression,
powered by grepl().
expect_no_match() provides the complementary case, checking that a
character vector does not match a regular expression.
Functions
-
expect_no_match(): Check that a string doesn't match a regular expression.
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
expect_match("Testing is fun", "fun")
expect_match("Testing is fun", "f.n")
expect_no_match("Testing is fun", "horrible")
show_failure(expect_match("Testing is fun", "horrible"))
show_failure(expect_match("Testing is fun", "horrible", fixed = TRUE))
# Zero-length inputs always fail
show_failure(expect_match(character(), "."))
Do you expect a vector with (these) names?
Description
You can either check for the presence of names (leaving expected
blank), specific names (by supplying a vector of names), or absence of
names (with NULL).
Usage
expect_named(
object,
expected,
ignore.order = FALSE,
ignore.case = FALSE,
info = NULL,
label = NULL
)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
expected |
Character vector of expected names. Leave missing to
match any names. Use |
ignore.order |
If |
ignore.case |
If |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
x <- c(a = 1, b = 2, c = 3)
expect_named(x)
expect_named(x, c("a", "b", "c"))
# Use options to control sensitivity
expect_named(x, c("B", "C", "A"), ignore.order = TRUE, ignore.case = TRUE)
# Can also check for the absence of names with NULL
z <- 1:4
expect_named(z, NULL)
Do you expect the absence of errors, warnings, messages, or other conditions?
Description
These expectations are the opposite of expect_error(),
expect_warning(), expect_message(), and expect_condition(). They
assert the absence of an error, warning, message, or condition,
respectively.
Usage
expect_no_error(object, ..., message = NULL, class = NULL)
expect_no_warning(object, ..., message = NULL, class = NULL)
expect_no_message(object, ..., message = NULL, class = NULL)
expect_no_condition(object, ..., message = NULL, class = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
... |
These dots are for future extensions and must be empty. |
message, class |
The default, In many cases, particularly when testing warnings and messages, you will
want to be more specific about the condition you are hoping not to see,
i.e. the condition that motivated you to write the test. Similar to
Note that you should only use |
Examples
expect_no_warning(1 + 1)
foo <- function(x) {
warning("This is a problem!")
}
# warning doesn't match so bubbles up:
expect_no_warning(foo(), message = "bananas")
# warning does match so causes a failure:
try(expect_no_warning(foo(), message = "problem"))
Test for absence of success or failure
Description
These functions are deprecated because expect_success() and
expect_failure() now test for exactly one success and no failures, and
exactly one failure and no successes, respectively.
Usage
expect_no_success(expr)
expect_no_failure(expr)
Do you expect NULL?
Description
This is a special case because NULL is a singleton, so it's possible to
check for it either with expect_equal(x, NULL) or expect_type(x, "NULL").
Usage
expect_null(object, info = NULL, label = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
x <- NULL
y <- 10
expect_null(x)
show_failure(expect_null(y))
Do you expect printed output to match this pattern?
Description
Test for output produced by print() or cat(). This is best used for
very simple output; for more complex cases use expect_snapshot().
Usage
expect_output(
object,
regexp = NULL,
...,
info = NULL,
label = NULL,
width = 80
)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp |
Regular expression to test against.
|
... |
Arguments passed on to
|
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
width |
Number of characters per line of output. This does not
inherit from |
Value
The first argument, invisibly.
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_reference(),
expect_silent(),
inheritance-expectations,
logical-expectations
Examples
str(mtcars)
expect_output(str(mtcars), "32 obs")
expect_output(str(mtcars), "11 variables")
# You can use the arguments of grepl to control the matching
expect_output(str(mtcars), "11 VARIABLES", ignore.case = TRUE)
expect_output(str(mtcars), "$ mpg", fixed = TRUE)
Do you expect the output/result to equal a known good value?
Description
expect_output_file() behaves identically to expect_known_output().
Usage
expect_output_file(
object,
file,
update = TRUE,
...,
info = NULL,
label = NULL,
print = FALSE,
width = 80
)
3rd edition
expect_output_file() is deprecated in the 3rd edition;
please use expect_snapshot_output() and friends instead.
Do you expect a reference to this object?
Description
expect_reference() compares the underlying memory addresses of
two symbols. It is for expert use only.
Usage
expect_reference(
object,
expected,
info = NULL,
label = NULL,
expected.label = NULL
)
Arguments
object, expected |
Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label, expected.label |
Used to customise failure messages. For expert use only. |
3rd edition
expect_reference() is deprecated in the third edition. If you know what
you're doing, and you really need this behaviour, just use is_reference()
directly: expect_true(rlang::is_reference(x, y)).
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_silent(),
inheritance-expectations,
logical-expectations
Do you expect a vector containing these values?
Description
-
expect_setequal(x, y)tests that every element ofxoccurs iny, and that every element ofyoccurs inx. -
expect_contains(x, y)tests thatxcontains every element ofy(i.e.yis a subset ofx). -
expect_in(x, y)tests that every element ofxis iny(i.e.xis a subset ofy). -
expect_disjoint(x, y)tests that no element ofxis iny(i.e.xis disjoint fromy). -
expect_mapequal(x, y)treats lists as if they are mappings between names and values. Concretely, checks thatxandyhave the same names, then checks thatx[names(y)]equalsy.
Usage
expect_setequal(object, expected)
expect_mapequal(object, expected)
expect_contains(object, expected)
expect_in(object, expected)
expect_disjoint(object, expected)
Arguments
object, expected |
Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
Details
Note that expect_setequal() ignores names, and you will be warned if both
object and expected have them.
Examples
expect_setequal(letters, rev(letters))
show_failure(expect_setequal(letters[-1], rev(letters)))
x <- list(b = 2, a = 1)
expect_mapequal(x, list(a = 1, b = 2))
show_failure(expect_mapequal(x, list(a = 1)))
show_failure(expect_mapequal(x, list(a = 1, b = "x")))
show_failure(expect_mapequal(x, list(a = 1, b = 2, c = 3)))
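The subset-style expectations can be sketched in the same spirit:

```r
library(testthat)

expect_contains(letters, c("a", "z"))  # expected is a subset of object
expect_in(c("a", "z"), letters)        # object is a subset of expected
expect_disjoint(1:3, 5:7)              # no shared elements
show_failure(expect_disjoint(1:3, 3:5))
```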
Do you expect code to execute silently?
Description
Checks that the code produces no output, messages, or warnings.
Usage
expect_silent(object)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
Value
The first argument, invisibly.
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
inheritance-expectations,
logical-expectations
Examples
expect_silent("123")
f <- function() {
message("Hi!")
warning("Hey!!")
print("OY!!!")
}
## Not run:
expect_silent(f())
## End(Not run)
Do you expect this code to run the same way as last time?
Description
Snapshot tests (aka golden tests) are similar to unit tests except that the
expected result is stored in a separate file that is managed by testthat.
Snapshot tests are useful when the expected value is large, or when
the intent of the code is something that can only be verified by a human
(e.g. this is a useful error message). Learn more in
vignette("snapshotting").
expect_snapshot() runs code as if you had executed it at the console, and
records the results, including output, messages, warnings, and errors.
If you just want to compare the result, try expect_snapshot_value().
Usage
expect_snapshot(
x,
cran = FALSE,
error = FALSE,
transform = NULL,
variant = NULL,
cnd_class = FALSE
)
Arguments
x |
Code to evaluate. |
cran |
Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile because they often rely on minor details of dependencies. |
error |
Do you expect the code to throw an error? The expectation will fail (even on CRAN) if an unexpected error is thrown or the expected error is not thrown. |
transform |
Optionally, a function to scrub sensitive or stochastic text from the output. Should take a character vector of lines as input and return a modified character vector as output. |
variant |
If non- You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations for operating system, R version, or version of key dependency. Variants are an advanced feature. When you use them, you'll need to carefully think about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo. Note that there's no way to declare all possible variants up front which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
cnd_class |
Whether to include the class of messages,
warnings, and errors in the snapshot. Only the most specific
class is included, i.e. the first element of |
Workflow
The first time that you run a snapshot expectation it will run x,
capture the results, and record them in tests/testthat/_snaps/{test}.md.
Each test file gets its own snapshot file, e.g. test-foo.R will get
_snaps/foo.md.
It's important to review the Markdown files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have updated in the expected way.
On subsequent runs, the result of x will be compared to the value stored
on disk. If it's different, the expectation will fail, and a new file
_snaps/{test}.new.md will be created. If the change was deliberate,
you can approve the change with snapshot_accept() and then the tests will
pass the next time you run them.
Note that snapshotting can only work when executing a complete test file
(with test_file(), test_dir(), or friends) because there's otherwise
no way to figure out the snapshot path. If you run snapshot tests
interactively, they'll just display the current value.
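A minimal sketch of how this looks in practice, as it might appear in a file such as tests/testthat/test-summary.R (the test name and code under test are made up):

```r
library(testthat)

test_that("mpg summary prints as expected", {
  # On the first run this records the printed output in
  # tests/testthat/_snaps/summary.md; later runs compare against it
  expect_snapshot({
    summary(mtcars$mpg)
  })
})
```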
Do you expect this code to create the same file as last time?
Description
Whole file snapshot testing is designed for testing objects that don't have
a convenient textual representation, with initial support for images
(.png, .jpg, .svg), data frames (.csv), and text files
(.R, .txt, .json, ...).
The first time expect_snapshot_file() is run, it will create
_snaps/{test}/{name}.{ext} containing reference output. Future runs will
be compared to this reference: if different, the test will fail and the new
results will be saved in _snaps/{test}/{name}.new.{ext}. To review
failures, call snapshot_review().
We generally expect this function to be used via a wrapper that takes care of ensuring that output is as reproducible as possible, e.g. automatically skipping tests where it's known that images can't be reproduced exactly.
Usage
expect_snapshot_file(
path,
name = basename(path),
binary = deprecated(),
cran = FALSE,
compare = NULL,
transform = NULL,
variant = NULL
)
announce_snapshot_file(path, name = basename(path))
compare_file_binary(old, new)
compare_file_text(old, new)
Arguments
path |
Path to file to snapshot. Optional for
|
name |
Snapshot name, taken from |
binary |
|
cran |
Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile because they often rely on minor details of dependencies. |
compare |
A function used to compare the snapshot files. It should take
two inputs, the paths to the
|
transform |
Optionally, a function to scrub sensitive or stochastic text from the output. Should take a character vector of lines as input and return a modified character vector as output. |
variant |
If non- Note that there's no way to declare all possible variants up front which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
old, new |
Paths to old and new snapshot files. |
Announcing snapshots
testthat automatically detects dangling snapshots that have been
written to the _snaps directory but which no longer have
corresponding R code to generate them. These dangling files are
automatically deleted so they don't clutter the snapshot
directory.
This can cause problems if your test is conditionally executed, either
because of an if statement or a skip(). To avoid files being deleted in
this case, you can call announce_snapshot_file() before the conditional
code.
test_that("can save a file", {
if (!can_save()) {
announce_snapshot_file(name = "data.txt")
skip("Can't save file")
}
path <- withr::local_tempfile()
expect_snapshot_file(save_file(path, mydata()), "data.txt")
})
Examples
# To use expect_snapshot_file() you'll typically need to start by writing
# a helper function that creates a file from your code, returning a path
save_png <- function(code, width = 400, height = 400) {
path <- tempfile(fileext = ".png")
png(path, width = width, height = height)
on.exit(dev.off())
code
path
}
path <- save_png(plot(1:5))
path
## Not run:
expect_snapshot_file(save_png(hist(mtcars$mpg)), "plot.png")
## End(Not run)
# You'd then also provide a helper that skips tests where you can't
# be sure of producing exactly the same output.
expect_snapshot_plot <- function(name, code) {
# Announce the file before touching skips or running `code`. This way,
# if the skips are active, testthat will not auto-delete the corresponding
# snapshot file.
name <- paste0(name, ".png")
announce_snapshot_file(name = name)
# Other packages might affect results
skip_if_not_installed("ggplot2", "2.0.0")
# Or maybe the output is different on some operating systems
skip_on_os("windows")
# You'll need to carefully think about and experiment with these skips
path <- save_png(code)
expect_snapshot_file(path, name)
}
Snapshot helpers
Description
These snapshotting functions are questioning because they were developed
before expect_snapshot() and we're not sure that they still have a
role to play.
-
expect_snapshot_output()captures just output printed to the console. -
expect_snapshot_error()captures an error message and optionally checks its class. -
expect_snapshot_warning()captures a warning message and optionally checks its class.
Usage
expect_snapshot_output(x, cran = FALSE, variant = NULL)
expect_snapshot_error(x, class = "error", cran = FALSE, variant = NULL)
expect_snapshot_warning(x, class = "warning", cran = FALSE, variant = NULL)
Arguments
x |
Code to evaluate. |
cran |
Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile because they often rely on minor details of dependencies. |
variant |
If non- You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations for operating system, R version, or version of key dependency. Variants are an advanced feature. When you use them, you'll need to carefully think about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo. Note that there's no way to declare all possible variants up front which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
class |
Class of expected error or warning. The expectation will
always fail (even on CRAN) if an error of this class isn't seen
when executing |
Do you expect this code to return the same value as last time?
Description
Captures the result of function, flexibly serializing it into a text
representation that's stored in a snapshot file. See expect_snapshot()
for more details on snapshot testing.
Usage
expect_snapshot_value(
x,
style = c("json", "json2", "deparse", "serialize"),
cran = FALSE,
tolerance = testthat_tolerance(),
...,
variant = NULL
)
Arguments
x |
Code to evaluate. |
style |
Serialization style to use:
|
cran |
Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile because they often rely on minor details of dependencies. |
tolerance |
Numerical tolerance: any differences (in the sense of
The default tolerance is |
... |
Passed on to |
variant |
If non- You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations for operating system, R version, or version of key dependency. Variants are an advanced feature. When you use them, you'll need to carefully think about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo. Note that there's no way to declare all possible variants up front which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
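A short sketch of value snapshotting inside a test file (the test name is made up; the tolerance guards against platform-level floating point differences):

```r
library(testthat)

test_that("regression coefficients are stable", {
  fit <- lm(mpg ~ wt, data = mtcars)
  # Serialize the named coefficient vector into the snapshot file
  expect_snapshot_value(coef(fit), style = "json2", tolerance = 1e-6)
})
```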
Test your custom expectations
Description
expect_success() checks that there's exactly one success and no failures;
expect_failure() checks that there's exactly one failure and no successes.
expect_snapshot_failure() records the failure message so that you can
manually check that it is informative.
Use show_failure() in examples to print the failure message without
throwing an error.
Usage
expect_success(expr)
expect_failure(expr, message = NULL, ...)
expect_snapshot_failure(expr)
show_failure(expr)
Arguments
expr |
Code to evaluate |
message |
Check that the failure message matches this regexp. |
... |
Other arguments passed on to |
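A short sketch of testing a hypothetical custom expectation, `expect_positive()`:

```r
library(testthat)

# A toy custom expectation built on pass()/fail()
expect_positive <- function(x) {
  if (x > 0) pass() else fail(sprintf("%s is not positive.", x))
  invisible(x)
}

expect_success(expect_positive(1))
expect_failure(expect_positive(-1), message = "not positive")
show_failure(expect_positive(-1))  # prints the failure without erroring
```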
Expect that a condition holds.
Description
An old style of testing that's no longer encouraged.
Usage
expect_that(object, condition, info = NULL, label = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
condition |
a function that returns whether or not the condition is met, and if not, an error message to display. |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
Value
the (internal) expectation result as an invisible list
3rd edition
This style of testing is formally deprecated as of the 3rd edition.
Use a more specific expect_ function instead.
See Also
fail() for an expectation that always fails.
Examples
expect_that(5 * 2, equals(10))
expect_that(sqrt(2) ^ 2, equals(2))
## Not run:
expect_that(sqrt(2) ^ 2, is_identical_to(2))
## End(Not run)
Do you expect a vector with this size and/or prototype?
Description
expect_vector() is a thin wrapper around vctrs::vec_assert(), converting
the results of that function into the expectations used by testthat. This
means that it uses the vctrs concepts of ptype (prototype) and size. See
details at https://vctrs.r-lib.org/articles/type-size.html
Usage
expect_vector(object, ptype = NULL, size = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
ptype |
(Optional) Vector prototype to test against. Should be a size-0 (empty) generalised vector. |
size |
(Optional) Size to check for. |
Examples
expect_vector(1:10, ptype = integer(), size = 10)
show_failure(expect_vector(1:10, ptype = integer(), size = 5))
show_failure(expect_vector(1:10, ptype = character(), size = 5))
Expectation conditions
Description
new_expectation() creates an expectation condition object and
exp_signal() signals it. expectation() does both. is.expectation()
tests if a captured condition is a testthat expectation.
These functions are primarily for internal use. If you are creating your
own expectation, you do not need these functions and should instead use
pass() or fail(). See vignette("custom-expectation") for more
details.
Usage
expectation(type, message, ..., srcref = NULL, trace = NULL)
new_expectation(
type,
message,
...,
srcref = NULL,
trace = NULL,
.subclass = NULL
)
exp_signal(exp)
is.expectation(x)
Arguments
type |
Expectation type. Must be one of "success", "failure", "error", "skip", "warning". |
message |
Message describing test failure |
... |
Additional attributes for the expectation object. |
srcref |
Optional |
trace |
An optional backtrace created by |
.subclass |
An optional subclass for the expectation object. |
exp |
An expectation object, as created by
|
x |
object to test for class membership |
Extract a reprex from a failed expectation
Description
extract_test() creates a minimal reprex for a failed expectation.
It extracts all non-test code before the failed expectation as well as
all code inside the test up to and including the failed expectation.
This is particularly useful when you're debugging test failures in someone else's package.
Usage
extract_test(location, path = stdout(), package = Sys.getenv("TESTTHAT_PKG"))
Arguments
location |
A string giving the location in the form
|
path |
Path to write the reprex to. Defaults to |
package |
If supplied, will be used to construct a test environment for the extracted code. |
Value
This function is called for its side effect of rendering a
reprex to path. It will never error: if extraction
fails, the error message will be written to path.
Examples
# If you see a test failure like this:
# -- Failure (test-extract.R:46:3): errors if can't find test -------------
# Expected FALSE to be TRUE.
# Differences:
# `actual`: FALSE
# `expected`: TRUE
# You can run this:
## Not run: extract_test("test-extract.R:46:3")
# to see just the code needed to reproduce the failure
Declare that an expectation either passes or fails
Description
These are the primitives that you can use to implement your own expectations.
Every path through an expectation should either call pass(), fail(),
or throw an error (e.g. if the arguments are invalid). Expectations should
always return invisible(act$val).
Learn more about creating your own expectations in
vignette("custom-expectation").
Usage
fail(
message = "Failure has been forced",
info = NULL,
srcref = NULL,
trace_env = caller_env(),
trace = NULL
)
pass()
Arguments
message |
A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen. |
info |
Character vector containing additional information. Included for backward compatibility only; new expectations should not use it. |
srcref |
Location of the failure. Should only need to be supplied explicitly when you need to forward a srcref captured elsewhere. |
trace_env |
If |
trace |
An optional backtrace created by |
Examples
expect_length <- function(object, n) {
act <- quasi_label(rlang::enquo(object), arg = "object")
act_n <- length(act$val)
if (act_n != n) {
fail(sprintf("%s has length %i, not length %i.", act$lab, act_n, n))
} else {
pass()
}
invisible(act$val)
}
Find reporter object given name or object.
Description
If not found, an informative error is thrown. Pass a character vector to create a MultiReporter composed of individual reporters. Returns NULL if given NULL.
Usage
find_reporter(reporter)
Arguments
reporter |
name of reporter(s), or reporter object(s) |
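A small sketch of the behaviour described above, resolving a reporter name to a Reporter object and a character vector to a MultiReporter:

```r
library(testthat)

# A single name resolves to the corresponding Reporter object
r <- find_reporter("summary")
class(r)

# A character vector produces a MultiReporter wrapping each one
m <- find_reporter(c("summary", "fail"))
class(m)

# NULL passes through unchanged
find_reporter(NULL)
```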
Find test files
Description
Find test files
Usage
find_test_scripts(
path,
filter = NULL,
invert = FALSE,
...,
full.names = TRUE,
start_first = NULL
)
Arguments
path |
path to tests |
filter |
If not |
invert |
If |
... |
Additional arguments passed to |
start_first |
A character vector of file patterns (globs, see
|
Value
A character vector of paths
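To illustrate, a throwaway directory shows that only files matching the test-file naming convention are returned, while helper files are ignored (the file names here are invented for the example):

```r
library(testthat)

# Create a scratch directory with two test files and one helper
tmp <- file.path(tempdir(), "demo-tests")
dir.create(tmp, showWarnings = FALSE)
file.create(file.path(tmp, c("test-alpha.R", "test-beta.R", "helper-x.R")))

# Only the test-* files are returned
find_test_scripts(tmp, full.names = FALSE)
```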
Do you expect an S3/S4/R6/S7 object that inherits from this class?
Description
See https://adv-r.hadley.nz/oo.html for an overview of R's OO systems, and the vocabulary used here.
-
expect_type(x, type)checks thattypeof(x)istype. -
expect_s3_class(x, class)checks thatxis an S3 object thatinherits()fromclass. -
expect_s3_class(x, NA)checks thatxisn't an S3 object. -
expect_s4_class(x, class)checks thatxis an S4 object thatis()class. -
expect_s4_class(x, NA)checks thatxisn't an S4 object. -
expect_r6_class(x, class)checks thatxis an R6 object that inherits fromclass. -
expect_s7_class(x, Class)checks thatxis an S7 object thatS7::S7_inherits()fromClass.
See expect_vector() for testing properties of objects created by vctrs.
Usage
expect_type(object, type)
expect_s3_class(object, class, exact = FALSE)
expect_s4_class(object, class)
expect_r6_class(object, class)
expect_s7_class(object, class)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
type |
String giving base type (as returned by |
class |
The required type varies depending on the function:
For historical reasons, |
exact |
If |
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
logical-expectations
Examples
x <- data.frame(x = 1:10, y = "x", stringsAsFactors = TRUE)
# A data frame is an S3 object with class data.frame
expect_s3_class(x, "data.frame")
show_failure(expect_s4_class(x, "data.frame"))
# A data frame is built from a list:
expect_type(x, "list")
f <- factor(c("a", "b", "c"))
o <- ordered(f)
# Using multiple class names tests if the object inherits from any of them
expect_s3_class(f, c("ordered", "factor"))
# Use exact = TRUE to test for exact match
show_failure(expect_s3_class(f, c("ordered", "factor"), exact = TRUE))
expect_s3_class(o, c("ordered", "factor"), exact = TRUE)
# An integer vector is an atomic vector of type "integer"
expect_type(x$x, "integer")
# It is not an S3 object
show_failure(expect_s3_class(x$x, "integer"))
# Above, we requested data.frame() converts strings to factors:
show_failure(expect_type(x$y, "character"))
expect_s3_class(x$y, "factor")
expect_type(x$y, "integer")
Is an error informative?
Description
is_informative_error() is a generic predicate that indicates
whether testthat users should explicitly test for an error
class. Since we no longer recommend you do that, this generic
has been deprecated.
Usage
is_informative_error(x, ...)
Arguments
x |
An error object. |
... |
These dots are for future extensions and must be empty. |
Details
A few classes are hard-coded as uninformative:
-
simpleError -
rlang_errorunless a subclass is detected -
Rcpp::eval_error -
Rcpp::exception
Determine testing status
Description
These functions help you determine if your code is running in a particular testing context:
-
is_testing()isTRUEinside a test. -
is_snapshot()isTRUEinside a snapshot test. -
is_checking()isTRUEinside ofR CMD check(i.e. bytest_check()). -
is_parallel()isTRUEif the tests are run in parallel. -
testing_package()gives name of the package being tested.
A common use of these functions is to compute a default value for a quiet
argument with is_testing() && !is_snapshot(). In this case, you'll
want to avoid a run-time dependency on testthat; instead, just copy
the implementation of these functions into a utils.R or similar.
Usage
is_testing()
is_parallel()
is_checking()
is_snapshot()
testing_package()
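The suggestion above of copying the implementation into your own package can be sketched as follows. The TESTTHAT env var is documented under local_test_context(); the helper name verbose_default() is invented for illustration.

```r
# Standalone copy to avoid a run-time dependency on testthat;
# suitable for a utils.R file. local_test_context() documents that
# testthat sets TESTTHAT = "true" while tests run.
is_testing <- function() {
  identical(Sys.getenv("TESTTHAT"), "true")
}

# Example: a default that is quiet under testing
verbose_default <- function() {
  !is_testing()
}
```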
Temporarily change the active testthat edition
Description
local_edition() allows you to temporarily (within a single test or
a single test file) change the active edition of testthat.
edition_get() allows you to retrieve the currently active edition.
Usage
local_edition(x, .env = parent.frame())
edition_get()
Arguments
x |
Edition to use. Should be a single integer. |
.env |
Environment that controls scope of changes. For expert use only. |
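A minimal sketch of the scoping: the edition change only lasts for the enclosing frame, here created with local().

```r
library(testthat)

# The edition change is reverted when the local() block exits
local({
  local_edition(2)
  edition_get()
})
```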
Temporarily redefine function definitions
Description
with_mocked_bindings() and local_mocked_bindings() provide tools for
"mocking", temporarily redefining a function so that it behaves differently
during tests. This is helpful for testing functions that depend on external
state (e.g. reading a value from a file or a website, or pretending a package
is or isn't installed).
Learn more in vignette("mocking").
Usage
local_mocked_bindings(..., .package = NULL, .env = caller_env())
with_mocked_bindings(code, ..., .package = NULL)
Arguments
... |
Name-value pairs providing new values (typically functions) to temporarily replace the named bindings. |
.package |
The name of the package where mocked functions should be
inserted. Generally, you should not supply this as it will be automatically
detected when whole package tests are run or when there's one package
under active development (i.e. loaded with |
.env |
Environment that defines effect scope. For expert use only. |
code |
Code to execute with specified bindings. |
Use
There are four places that the function you are trying to mock might come from:
Internal to your package.
Imported from an external package via the NAMESPACE.
The base environment.
Called from an external package with ::.
They are described in turn below.
(To mock S3 & S4 methods and R6 classes see local_mocked_s3_method(),
local_mocked_s4_method(), and local_mocked_r6_class().)
Internal & imported functions
You mock internal and imported functions the same way. For example, take this code:
some_function <- function() {
another_function()
}
It doesn't matter whether another_function() is defined by your package
or you've imported it from a dependency with @import or @importFrom,
you mock it the same way:
local_mocked_bindings( another_function = function(...) "new_value" )
Base functions
To mock a function in the base package, you need to make sure that you
have a binding for this function in your package. It's easiest to do this
by binding the value to NULL. For example, if you wanted to mock
interactive() in your package, you'd need to include this code somewhere
in your package:
interactive <- NULL
Why is this necessary? with_mocked_bindings() and local_mocked_bindings()
work by temporarily modifying the bindings within your package's namespace.
When tests run inside R CMD check, the namespace is locked, which
means it's not possible to create new bindings, so you need to make
sure that the binding already exists.
Namespaced calls
It's trickier to mock functions in other packages that you call with ::.
For example, take this minor variation:
some_function <- function() {
anotherpackage::another_function()
}
To mock this function, you'd need to modify another_function() inside the
anotherpackage package. You can do this by supplying the .package
argument to local_mocked_bindings() but we don't recommend it because
it will affect all calls to anotherpackage::another_function(), not just
the calls originating in your package. Instead, it's safer to either import
the function into your package, or make a wrapper that you can mock:
some_function <- function() {
my_wrapper()
}
my_wrapper <- function(...) {
anotherpackage::another_function(...)
}
local_mocked_bindings(
my_wrapper = function(...) "new_value"
)
Multiple return values / sequence of outputs
To mock a function that returns different values in sequence,
for instance an API call whose status would be 502 then 200,
or a user input to readline(), you can use mock_output_sequence():
local_mocked_bindings(readline = mock_output_sequence("3", "This is a note", "n"))
See Also
Other mocking:
mock_output_sequence()
Mock an R6 class
Description
This function allows you to temporarily override an R6 class definition.
It works by creating a subclass then using local_mocked_bindings() to
temporarily replace the original definition. This means that it will not
affect subclasses of the original class; please file an issue if you need
this.
Learn more about mocking in vignette("mocking").
Usage
local_mocked_r6_class(
class,
public = list(),
private = list(),
frame = caller_env()
)
Arguments
class |
An R6 class definition. |
public, private |
A named list of public and private methods/data. |
frame |
Calling frame which determines the scope of the mock. Only needed when wrapping in another local helper. |
Mock S3 and S4 methods
Description
These functions allow you to temporarily override S3 and S4 methods that
already exist. It works by using registerS3method()/setMethod() to
temporarily replace the original definition.
Learn more about mocking in vignette("mocking").
Usage
local_mocked_s3_method(generic, signature, definition, frame = caller_env())
local_mocked_s4_method(generic, signature, definition, frame = caller_env())
Arguments
generic |
A string giving the name of the generic. |
signature |
A character vector giving the signature of the method. |
definition |
A function providing the method definition. |
frame |
Calling frame which determines the scope of the mock. Only needed when wrapping in another local helper. |
Examples
x <- as.POSIXlt(Sys.time())
local({
local_mocked_s3_method("length", "POSIXlt", function(x) 42)
length(x)
})
length(x)
Instantiate local snapshotting context
Description
Needed if you want to run snapshot tests outside of the usual testthat framework. For expert use only.
Usage
local_snapshotter(
reporter = SnapshotReporter,
snap_dir = "_snaps",
cleanup = FALSE,
desc = NULL,
fail_on_new = NULL,
frame = caller_env()
)
Temporarily set options for maximum reproducibility
Description
local_test_context() is run automatically by test_that() but you may
want to run it yourself if you want to replicate test results interactively.
If run inside a function, the effects are automatically reversed when the
function exits; if running in the global environment, use
withr::deferred_run() to undo.
local_reproducible_output() is run automatically by test_that() in the
3rd edition. You might want to call it to override the default settings
inside a test, if you want to test Unicode, coloured output, or a
non-standard width.
Usage
local_test_context(.env = parent.frame())
local_reproducible_output(
width = 80,
crayon = FALSE,
unicode = FALSE,
rstudio = FALSE,
hyperlinks = FALSE,
lang = "C",
.env = parent.frame()
)
Arguments
.env |
Environment to use for scoping; expert use only. |
width |
Value of the |
crayon |
Determines whether or not crayon (now cli) colour should be applied. |
unicode |
Value of the |
rstudio |
Should we pretend that we're inside of RStudio? |
hyperlinks |
Should we use ANSI hyperlinks? |
lang |
Optionally, supply a BCP47 language code to set the language used for translating error messages. This is a lower case two letter ISO 639 language code, optionally followed by "_" or "-" and an upper case two letter ISO 3166 region code. |
Details
local_test_context() sets TESTTHAT = "true", which ensures that
is_testing() returns TRUE and allows code to tell if it is run by
testthat.
In the third edition, local_test_context() also calls
local_reproducible_output() which temporary sets the following options:
-
cli.dynamic = FALSEso that tests assume that they are not run in a dynamic console (i.e. one where you can move the cursor around). -
cli.unicode(default:FALSE) so that the cli package never generates unicode output (normally cli uses unicode on Linux/Mac but not Windows). Windows can't easily save unicode output to disk, so it must be set to false for consistency. -
cli.condition_width = Infso that new lines introduced while width-wrapping condition messages don't interfere with message matching. -
crayon.enabled(default:FALSE) suppresses ANSI colours generated by the cli and crayon packages (normally colours are used if cli detects that you're in a terminal that supports colour). -
cli.num_colors(default:1L) Same as the crayon option. -
lifecycle_verbosity = "warning"so that every lifecycle problem always generates a warning (otherwise deprecated functions don't generate a warning every time). -
max.print = 99999so the same number of values are printed. -
OutDec = "."so numbers always uses.as the decimal point (European users sometimes setOutDec = ","). -
rlang_interactive = FALSEso thatrlang::is_interactive()returnsFALSE, and code that uses it pretends you're in a non-interactive environment. -
useFancyQuotes = FALSEso base R functions always use regular (straight) quotes (otherwise the default is locale dependent, seesQuote()for details). -
width(default: 80) to control the width of printed output (usually this varies with the size of your console).
And modifies the following env vars:
Unsets
RSTUDIO, which ensures that RStudio is never detected as running.Sets
LANGUAGE = "en", which ensures that no message translation occurs.
Finally, it sets the collation locale to "C", which ensures that character sorting is the same regardless of system locale.
Examples
local({
local_test_context()
cat(cli::col_blue("Text will not be colored"))
cat(cli::symbol$ellipsis)
cat("\n")
})
test_that("test ellipsis", {
local_reproducible_output(unicode = FALSE)
expect_equal(cli::symbol$ellipsis, "...")
local_reproducible_output(unicode = TRUE)
expect_equal(cli::symbol$ellipsis, "\u2026")
})
Locally set test directory options
Description
For expert use only.
Usage
local_test_directory(path, package = NULL, .env = parent.frame())
Arguments
path |
Path to directory of files |
package |
Optional package name, if known. |
Do you expect TRUE or FALSE?
Description
These are fall-back expectations that you can use when none of the other more specific expectations apply. The disadvantage is that you may get a less informative error message.
Attributes are ignored.
Usage
expect_true(object, info = NULL, label = NULL)
expect_false(object, info = NULL, label = NULL)
Arguments
object |
Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
info |
Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label |
Used to customise failure messages. For expert use only. |
See Also
Other expectations:
comparison-expectations,
equality-expectations,
expect_error(),
expect_length(),
expect_match(),
expect_named(),
expect_null(),
expect_output(),
expect_reference(),
expect_silent(),
inheritance-expectations
Examples
expect_true(2 == 2)
# Failed expectations will throw an error
show_failure(expect_true(2 != 2))
# where possible, use more specific expectations, to get more informative
# error messages
a <- 1:4
show_failure(expect_true(length(a) == 3))
show_failure(expect_equal(length(a), 3))
x <- c(TRUE, TRUE, FALSE, TRUE)
show_failure(expect_true(all(x)))
show_failure(expect_all_true(x))
Make an equality test.
Description
This is a convenience function to make an expectation that checks that input stays the same.
Usage
make_expectation(x, expectation = "equals")
Arguments
x |
a vector of values |
expectation |
the type of equality you want to test for
( |
Examples
x <- 1:10
make_expectation(x)
make_expectation(mtcars$mpg)
df <- data.frame(x = 2)
make_expectation(df)
Mock a sequence of output from a function
Description
Specify multiple return values for mocking
Usage
mock_output_sequence(..., recycle = FALSE)
Arguments
... |
< |
recycle |
whether to recycle. If |
Value
A function that you can use within local_mocked_bindings() and
with_mocked_bindings()
See Also
Other mocking:
local_mocked_bindings()
Examples
# inside local_mocked_bindings()
## Not run:
local_mocked_bindings(readline = mock_output_sequence("3", "This is a note", "n"))
## End(Not run)
# for understanding
mocked_sequence <- mock_output_sequence("3", "This is a note", "n")
mocked_sequence()
mocked_sequence()
mocked_sequence()
try(mocked_sequence())
recycled_mocked_sequence <- mock_output_sequence(
"3", "This is a note", "n",
recycle = TRUE
)
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()
Negate an expectation
Description
This negates an expectation, making it possible to express that you want the opposite of a standard expectation. This function is deprecated and will be removed in a future version.
Usage
not(f)
Arguments
f |
an existing expectation function |
Old-style expectations.
Description
Early versions of testthat used a style of testing that looked like
expect_that(a, equals(b)). This allowed expectations to read like
English sentences, but was verbose and a bit too cutesy. This style
will continue to work but has been soft-deprecated - it is no longer
documented, and new expectations will only use the new style
expect_equal(a, b).
Usage
is_a(class)
has_names(expected, ignore.order = FALSE, ignore.case = FALSE)
is_less_than(expected, label = NULL, ...)
is_more_than(expected, label = NULL, ...)
equals(expected, label = NULL, ...)
is_equivalent_to(expected, label = NULL)
is_identical_to(expected, label = NULL)
equals_reference(file, label = NULL, ...)
shows_message(regexp = NULL, all = FALSE, ...)
gives_warning(regexp = NULL, all = FALSE, ...)
prints_text(regexp = NULL, ...)
throws_error(regexp = NULL, ...)
Quasi-labelling
Description
The first argument to every expect_ function can use unquoting to
construct better labels. This makes it easy to create informative labels when
expectations are used inside a function or a for loop. quasi_label() wraps
up the details, returning the expression and label.
Usage
quasi_label(quo, label = NULL, arg = NULL)
Arguments
quo |
A quosure created by |
label |
An optional label to override the default. This is
only provided for internal usage. Modern expectations should not
include a |
arg |
Argument name shown in error message if |
Value
A list containing two elements:
val |
The evaluated value of |
lab |
The quasiquoted label generated from |
Limitations
Because all expect_ functions use unquoting to generate more informative
labels, you cannot use unquoting for other purposes. Instead, you'll need
to perform all other unquoting outside of the expectation and only test
the results.
Examples
f <- function(i) if (i > 3) i * 9 else i * 10
i <- 10
# This sort of expression commonly occurs inside a for loop or function
# And the failure isn't helpful because you can't see the value of i
# that caused the problem:
show_failure(expect_equal(f(i), i * 10))
# To overcome this issue, testthat allows you to unquote expressions using
# !!. This causes the failure message to show the value rather than the
# variable name
show_failure(expect_equal(f(!!i), !!(i * 10)))
Objects exported from other packages
Description
These objects are imported from other packages. Follow the links below to see their documentation.
- magrittr
Get and set active reporter.
Description
get_reporter() and set_reporter() access and modify the current "active"
reporter. Generally, these functions should not be called directly; instead
use with_reporter() to temporarily change, then reset, the active reporter.
Usage
set_reporter(reporter)
get_reporter()
with_reporter(reporter, code, start_end_reporter = TRUE)
Arguments
reporter |
Reporter to use to summarise output. Can be supplied
as a string (e.g. "summary") or as an R6 object
(e.g. See Reporter for more details and a list of built-in reporters. |
code |
Code to execute. |
start_end_reporter |
Should the reporters |
Value
with_reporter() invisibly returns the reporter active when code
was evaluated.
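A short sketch of with_reporter(): run some expectations under a named reporter and capture the returned reporter object. The "silent" reporter is one of testthat's built-in reporters.

```r
library(testthat)

# Run an expectation under the silent reporter; the reporter object
# active during evaluation is returned invisibly
r <- with_reporter("silent", {
  expect_true(TRUE)
})
class(r)
```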
Set maximum number of test failures allowed before aborting the run
Description
This sets the TESTTHAT_MAX_FAILS env var which will affect both the
current R process and any processes launched from it.
Usage
set_max_fails(n)
Arguments
n |
Maximum number of failures allowed. |
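Per the description above, the setting is carried via the TESTTHAT_MAX_FAILS environment variable, so it propagates to child processes; a quick sketch:

```r
library(testthat)

# Setting the limit writes the TESTTHAT_MAX_FAILS env var, which
# child R processes inherit
set_max_fails(10)
Sys.getenv("TESTTHAT_MAX_FAILS")
```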
Check for global state changes
Description
One of the most pernicious challenges to debug is when a test runs fine in your test suite, but fails when you run it interactively (or similarly, it fails randomly when running your tests in parallel). One of the most common causes of this problem is accidentally changing global state in a previous test (e.g. changing an option, an environment variable, or the working directory). This is hard to debug, because it's very hard to figure out which test made the change.
Luckily testthat provides a tool to figure out if tests are changing global
state. You can register a state inspector with set_state_inspector() and
testthat will run it before and after each test, store the results, then
report if there are any differences. For example, if you wanted to see if
any of your tests were changing options or environment variables, you could
put this code in tests/testthat/helper-state.R:
set_state_inspector(function() {
list(
options = options(),
envvars = Sys.getenv()
)
})
(You might discover other packages outside your control are changing the global state, in which case you might want to modify this function to ignore those values.)
Other problems that can be troublesome to resolve are CRAN check notes that report things like connections being left open. You can easily debug that problem with:
set_state_inspector(function() {
getAllConnections()
})
Usage
set_state_inspector(callback, tolerance = testthat_tolerance())
Arguments
callback |
Either a zero-argument function that returns an object
capturing global state that you're interested in, or |
tolerance |
If non- It uses the same algorithm as |
Simulate a test environment
Description
This function is designed to allow you to simulate testthat's testing environment in an interactive session. To undo its effect, you will need to restart your R session.
Usage
simulate_test_env(package, path)
Arguments
package |
Name of installed package. |
path |
Path to |
Skip a test for various reasons
Description
skip_if() and skip_if_not() allow you to skip tests, immediately
concluding a test_that() block without executing any further expectations.
This allows you to skip a test without failure, if for some reason it
can't be run (e.g. it depends on the feature of a specific operating system,
or it requires a specific version of a package).
See vignette("skipping") for more details.
Usage
skip(message = "Skipping")
skip_if_not(condition, message = NULL)
skip_if(condition, message = NULL)
skip_if_not_installed(pkg, minimum_version = NULL)
skip_unless_r(spec)
skip_if_offline(host = "captive.apple.com")
skip_on_cran()
local_on_cran(on_cran = TRUE, frame = caller_env())
skip_on_os(os, arch = NULL)
skip_on_ci()
skip_on_covr()
skip_on_bioc()
skip_if_translated(msgid = "'%s' not found")
Arguments
message |
A message describing why the test was skipped. |
condition |
Boolean condition to check. |
pkg |
Name of package to check for |
minimum_version |
Minimum required version for the package |
spec |
A version specification like '>= 4.1.0' denoting that this test should only be run on R versions 4.1.0 and later. |
host |
A string with a hostname to lookup |
on_cran |
Pretend we're on CRAN ( |
frame |
Calling frame to tie change to; expert use only. |
os |
Character vector of one or more operating systems to skip on.
Supported values are |
arch |
Character vector of one or more architectures to skip on.
Common values include |
msgid |
R message identifier used to check for translation: the default
uses a message included in most translation packs. See the complete list in
|
Helpers
-
skip_if_not_installed("pkg")skips tests if package "pkg" is not installed or cannot be loaded (usingrequireNamespace()). Generally, you can assume that suggested packages are installed, and you do not need to check for them specifically, unless they are particularly difficult to install. -
skip_if_offline()skips if an internet connection is not available (usingcurl::nslookup()) or if the test is run on CRAN. Requires {curl} to be installed and included in the dependencies of your package. -
skip_if_translated("msg")skips tests if the "msg" is translated. -
skip_on_bioc()skips on Bioconductor (using theIS_BIOC_BUILD_MACHINEenv var). -
skip_on_cran()skips on CRAN (using theNOT_CRANenv var set by devtools and friends).local_on_cran()gives you the ability to easily simulate what will happen on CRAN. -
skip_on_covr()skips when covr is running (using theR_COVRenv var). -
skip_on_ci()skips on continuous integration systems like GitHub Actions, travis, and appveyor (using theCIenv var). -
skip_on_os()skips on the specified operating system(s) ("windows", "mac", "linux", or "solaris").
Examples
if (FALSE) skip("Some Important Requirement is not available")
test_that("skip example", {
expect_equal(1, 1L) # this expectation runs
skip('skip')
expect_equal(1, 2) # this one skipped
expect_equal(1, 3) # this one is also skipped
})
Superseded skip functions
Description
-
skip_on_travis()andskip_on_appveyor()have been superseded byskip_on_ci().
Usage
skip_on_travis()
skip_on_appveyor()
Accept or reject modified snapshots
Description
-
snapshot_accept()accepts all modified snapshots. -
snapshot_reject()rejects all modified snapshots by deleting the.newvariants. -
snapshot_review()opens a Shiny app that shows a visual diff of each modified snapshot. This is particularly useful for whole file snapshots created byexpect_snapshot_file().
Usage
snapshot_accept(files = NULL, path = "tests/testthat")
snapshot_reject(files = NULL, path = "tests/testthat")
snapshot_review(files = NULL, path = "tests/testthat", ...)
Arguments
files |
Optionally, filter effects to snapshots from specified files.
This can be a snapshot name (e.g. |
path |
Path to tests. |
... |
Additional arguments passed on to |
Download snapshots from GitHub
Description
If your snapshots fail on GitHub, it can be a pain to figure out exactly why, or to incorporate them into your local package. This function makes it easy, only requiring you to interactively select which job you want to take the artifacts from.
Note that you should not generally need to use this function manually; instead copy and paste from the hint emitted on GitHub.
Usage
snapshot_download_gh(repository, run_id, dest_dir = ".")
Arguments
repository |
Repository owner/name, e.g. |
run_id |
Run ID, e.g. |
dest_dir |
Directory to download to. Defaults to the current directory. |
Source a file, directory of files, or various important subsets
Description
These are used by test_dir() and friends
Usage
source_file(
path,
env = test_env(),
chdir = TRUE,
desc = NULL,
wrap = TRUE,
shuffle = FALSE,
error_call = caller_env()
)
source_dir(
path,
pattern = "\\.[rR]$",
env = test_env(),
chdir = TRUE,
wrap = TRUE,
shuffle = FALSE
)
source_test_helpers(path = "tests/testthat", env = test_env())
source_test_setup(path = "tests/testthat", env = test_env())
source_test_teardown(path = "tests/testthat", env = test_env())
Arguments
path |
Path to files. |
env |
Environment in which to evaluate code. |
chdir |
Change working directory to |
desc |
A character vector used to filter tests. This is used to (recursively) filter the content of the file, so that only the non-test code up to and including the matching test is run. |
wrap |
Automatically wrap all code within |
shuffle |
If |
pattern |
Regular expression used to filter files. |
Mark a test as successful
Description
This is an older version of pass() that exists for backwards compatibility.
You should now use pass() instead.
Usage
succeed(message = "Success has been forced", info = NULL)
Arguments
message |
A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen. |
info |
Character vector containing additional information. Included for backward compatibility only; new expectations should not use it. |
Does code take less than the expected amount of time to run?
Description
This is useful for performance regression testing.
Usage
takes_less_than(amount)
Arguments
amount |
maximum duration in seconds |
Run code before/after tests
Description
We no longer recommend using setup() and teardown(); instead
we think it's better practice to use a test fixture as described in
vignette("test-fixtures").
Code in a setup() block is run immediately in a clean environment.
Code in a teardown() block is run upon completion of a test file,
even if it exits with an error. Multiple calls to teardown() will be
executed in the order they were created.
Usage
teardown(code, env = parent.frame())
setup(code, env = parent.frame())
Arguments
code |
Code to evaluate |
env |
Environment in which code will be evaluated. For expert use only. |
Examples
## Not run:
# Old approach
tmp <- tempfile()
setup(writeLines("some test data", tmp))
teardown(unlink(tmp))
## End(Not run)
# Now recommended:
local_test_data <- function(env = parent.frame()) {
tmp <- tempfile()
writeLines("some test data", tmp)
withr::defer(unlink(tmp), env)
tmp
}
# Then call local_test_data() in your tests
Run code after all test files
Description
This environment has no purpose other than as a handle for withr::defer():
use it when you want to run code after all tests have been run.
Typically, you'll use withr::defer(cleanup(), teardown_env())
immediately after you've made a mess in a setup-*.R file.
Usage
teardown_env()
Run all tests in a directory
Description
This function is the low-level workhorse that powers test_local() and
test_package(). Generally, you should not call this function directly.
In particular, you are responsible for ensuring that the functions to test
are available in the test env (e.g. via load_package).
See vignette("special-files") to learn more about the conventions for test,
helper, and setup files that testthat uses, and what you might use each for.
Usage
test_dir(
path,
filter = NULL,
reporter = NULL,
env = NULL,
...,
load_helpers = TRUE,
stop_on_failure = TRUE,
stop_on_warning = FALSE,
package = NULL,
load_package = c("none", "installed", "source"),
shuffle = FALSE
)
Arguments
path |
Path to directory containing tests. |
filter |
If not |
reporter |
Reporter to use to summarise output. Can be supplied
as a string (e.g. "summary") or as an R6 object
(e.g. See Reporter for more details and a list of built-in reporters. |
env |
Environment in which to execute the tests. Expert use only. |
... |
Additional arguments passed to |
load_helpers |
Source helper files before running the tests? |
stop_on_failure |
If |
stop_on_warning |
If |
package |
If these tests belong to a package, the name of the package. |
load_package |
Strategy to use for load package code:
|
shuffle |
If |
Value
A list (invisibly) containing data about the test results.
Environments
Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
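As a sketch of direct usage (the path and filter are illustrative placeholders; in practice prefer test_local() or test_package()):

```r
library(testthat)

# Run every test-*.R file in a directory; "tests/testthat" is a
# placeholder path. filter is a regular expression matched against
# file names, so this would run only test files mentioning "utils".
results <- test_dir(
  "tests/testthat",
  reporter = "summary",
  filter = "utils"
)
```

The returned list (invisibly) contains one entry per test with its expectations and outcomes.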
Generate default testing environment.
Description
We use a new environment which inherits from globalenv() or a package
namespace. In an ideal world, we'd avoid putting the global environment on
the search path for tests, but it's not currently possible without losing
the ability to load packages in tests.
Usage
test_env(package = NULL)
Test package examples
Description
These helper functions make it easier to test the examples in a package. Each example counts as one test, and it succeeds if the code runs without an error. Generally, this is redundant with R CMD check, and is not recommended in routine practice.
Usage
test_examples(path = "../..")
test_rd(rd, title = attr(rd, "Rdfile"))
test_example(path, title = path)
Arguments
path |
For |
rd |
A parsed Rd object, obtained from |
title |
Test title to use |
Run tests in a single file
Description
Helper, setup, and teardown files located in the same directory as the
test will also be run. See vignette("special-files") for details.
Usage
test_file(
path,
reporter = default_compact_reporter(),
desc = NULL,
package = NULL,
shuffle = FALSE,
...
)
Arguments
path |
Path to file. |
reporter |
Reporter to use to summarise output. Can be supplied
as a string (e.g. "summary") or as an R6 object
(e.g. See Reporter for more details and a list of built-in reporters. |
desc |
Optionally, supply a string here to run only a single
test ( |
package |
If these tests belong to a package, the name of the package. |
shuffle |
If |
... |
Additional parameters passed on to |
Value
A list (invisibly) containing data about the test results.
Environments
Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
Examples
path <- testthat_example("success")
test_file(path)
test_file(path, desc = "some tests have warnings")
test_file(path, reporter = "minimal")
Run all tests in a package
Description
- test_local() tests a local source package.
- test_package() tests an installed package.
- test_check() checks a package during R CMD check.
See vignette("special-files") to learn about the various files that
testthat works with.
Usage
test_package(package, reporter = check_reporter(), ...)
test_check(package, reporter = check_reporter(), ...)
test_local(
path = ".",
reporter = NULL,
...,
load_package = "source",
shuffle = FALSE
)
Arguments
package |
If these tests belong to a package, the name of the package. |
reporter |
Reporter to use to summarise output. Can be supplied
as a string (e.g. "summary") or as an R6 object
(e.g. See Reporter for more details and a list of built-in reporters. |
... |
Additional arguments passed to |
path |
Path to directory containing tests. |
load_package |
Strategy to use for load package code:
|
shuffle |
If |
Value
A list (invisibly) containing data about the test results.
R CMD check
To run testthat automatically from R CMD check, make sure you have
a tests/testthat.R that contains:
library(testthat)
library(yourpackage)
test_check("yourpackage")
Environments
Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
Locate a file in the testing directory
Description
Many tests require some external file (e.g. a .csv if you're testing a
data import function) but the working directory varies depending on the way
that you're running the test (e.g. interactively, with devtools::test(),
or with R CMD check). test_path() understands these variations and
automatically generates a path relative to tests/testthat, regardless of
where that directory might reside relative to the current working directory.
Usage
test_path(...)
Arguments
... |
Character vectors giving path components. |
Value
A character vector giving the path.
Examples
## Not run:
test_path("foo.csv")
test_path("data", "foo.csv")
## End(Not run)
Run a test
Description
A test encapsulates a series of expectations about a small, self-contained
unit of functionality. Each test contains one or more expectations, such as
expect_equal() or expect_error(), and lives in a tests/testthat/test*
file, often together with other tests that relate to the same function or set
of functions.
Each test has its own execution environment, so an object created in a test also dies with the test. Note that this cleanup does not happen automatically for other aspects of global state, such as session options or filesystem changes. Avoid changing global state, when possible, and reverse any changes that you do make.
Usage
test_that(desc, code)
Arguments
desc |
Test name. Names should be brief, but evocative. It's common to
write the description so that it reads like a natural sentence, e.g.
|
code |
Test code containing expectations. Braces ( |
Value
When run interactively, returns invisible(TRUE) if all tests
pass, otherwise throws an error.
Examples
test_that("trigonometric functions match identities", {
expect_equal(sin(pi / 4), 1 / sqrt(2))
expect_equal(cos(pi / 4), 1 / sqrt(2))
expect_equal(tan(pi / 4), 1)
})
## Not run:
test_that("trigonometric functions match identities", {
expect_equal(sin(pi / 4), 1)
})
## End(Not run)
Retrieve paths to built-in example test files
Description
testthat_examples() retrieves path to directory of test files,
testthat_example() retrieves path to a single test file.
Usage
testthat_examples()
testthat_example(filename)
Arguments
filename |
Name of test file |
Examples
dir(testthat_examples())
testthat_example("success")
Create a testthat_results object from the test results
as stored in the ListReporter results field.
Description
Create a testthat_results object from the test results
as stored in the ListReporter results field.
Usage
testthat_results(results)
Arguments
results |
a list as stored in ListReporter |
Value
its list argument as a testthat_results object
See Also
ListReporter
Default numeric tolerance
Description
testthat's default numeric tolerance is 1.4901161 × 10⁻⁸, i.e. sqrt(.Machine$double.eps).
Usage
testthat_tolerance()
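The documented value can be reproduced from base R alone, since it is the square root of machine epsilon for IEEE doubles:

```r
# Reproduce the default tolerance from base R: sqrt of machine epsilon
# for IEEE 754 doubles is about 1.4901161e-08.
tol <- sqrt(.Machine$double.eps)

# Differences smaller than the tolerance compare as equal under
# all.equal(), which is what expect_equal() uses by default.
isTRUE(all.equal(1, 1 + 1e-9, tolerance = tol))
```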
Evaluate an expectation multiple times until it succeeds
Description
If you have a flaky test, you can use try_again() to run it a few times
until it succeeds. In most cases, you are better off fixing the underlying
cause of the flakiness, but sometimes that's not possible.
Usage
try_again(times, code)
Arguments
times |
Number of times to retry. |
code |
Code to evaluate. |
Examples
usually_return_1 <- function() {
if (runif(1) < 0.1) 0 else 1
}
## Not run:
# 10% chance of failure:
expect_equal(usually_return_1(), 1)
# 1% chance of failure:
try_again(1, expect_equal(usually_return_1(), 1))
# 0.1% chance of failure:
try_again(2, expect_equal(usually_return_1(), 1))
## End(Not run)
Use Catch for C++ unit testing
Description
Add the necessary infrastructure to enable C++ unit testing
in R packages with Catch and
testthat.
Usage
use_catch(dir = getwd())
Arguments
dir |
The directory containing an R package. |
Details
Calling use_catch() will:
- Create a file src/test-runner.cpp, which ensures that the testthat package will understand how to run your package's unit tests,
- Create an example test file src/test-example.cpp, which showcases how you might use Catch to write a unit test,
- Add a test file tests/testthat/test-cpp.R, which ensures that testthat will run your compiled tests during invocations of devtools::test() or R CMD check, and
- Create a file R/catch-routine-registration.R, which ensures that R will automatically register this routine when tools::package_native_routine_registration_skeleton() is invoked.
You will also need to:
- Add xml2 to Suggests, with e.g. usethis::use_package("xml2", "Suggests")
- Add testthat to LinkingTo, with e.g. usethis::use_package("testthat", "LinkingTo")
C++ unit tests can be added to C++ source files within the
src directory of your package, with a format similar
to R code tested with testthat. Here's a simple example
of a unit test written with testthat + Catch:
context("C++ Unit Test") {
test_that("two plus two is four") {
int result = 2 + 2;
expect_true(result == 4);
}
}
When your package is compiled, unit tests alongside a harness
for running these tests will be compiled into your R package,
with the C entry point run_testthat_tests(). testthat
will use that entry point to run your unit tests when detected.
Functions
All of the functions provided by Catch are
available with the CATCH_ prefix – see the Catch
documentation for a full list. testthat provides the
following wrappers, to conform with testthat's
R interface:
| Function | Catch | Description |
context | CATCH_TEST_CASE | The context of a set of tests. |
test_that | CATCH_SECTION | A test section. |
expect_true | CATCH_CHECK | Test that an expression evaluates to TRUE. |
expect_false | CATCH_CHECK_FALSE | Test that an expression evaluates to FALSE. |
expect_error | CATCH_CHECK_THROWS | Test that evaluation of an expression throws an exception. |
expect_error_as | CATCH_CHECK_THROWS_AS | Test that evaluation of an expression throws an exception of a specific class. |
In general, you should prefer using the testthat
wrappers, as testthat also does some work to
ensure that any unit tests within will not be compiled or
run when using the Solaris Studio compilers (as these are
currently unsupported by Catch). This should make it
easier to submit packages to CRAN that use Catch.
Symbol Registration
If you've opted to disable dynamic symbol lookup in your
package, then you'll need to explicitly export a symbol
in your package that testthat can use to run your unit
tests. testthat will look for a routine with one of the names:
C_run_testthat_tests
c_run_testthat_tests
run_testthat_tests
Assuming you have useDynLib(<pkg>, .registration = TRUE) in your package's
NAMESPACE file, this implies having routine registration code of the form:
// The definition for this function comes from the file 'src/test-runner.cpp',
// which is generated via `testthat::use_catch()`.
extern SEXP run_testthat_tests();
static const R_CallMethodDef callMethods[] = {
// other .Call method definitions,
{"run_testthat_tests", (DL_FUNC) &run_testthat_tests, 0},
{NULL, NULL, 0}
};
void R_init_<pkg>(DllInfo* dllInfo) {
R_registerRoutines(dllInfo, NULL, callMethods, NULL, NULL);
R_useDynamicSymbols(dllInfo, FALSE);
}
replacing <pkg> above with the name of your package, as appropriate.
See Controlling Visibility and Registering Symbols in the Writing R Extensions manual for more information.
Advanced Usage
If you'd like to write your own Catch test runner, you can
instead use the testthat::catchSession() object in a file
with the form:
#define TESTTHAT_TEST_RUNNER
#include <testthat.h>
void run()
{
Catch::Session& session = testthat::catchSession();
// interact with the session object as desired
}
This can be useful if you'd like to run your unit tests with custom arguments passed to the Catch session.
Standalone Usage
If you'd like to use the C++ unit testing facilities provided
by Catch, but would prefer not to use the regular testthat
R testing infrastructure, you can manually run the unit tests
by inserting a call to:
.Call("run_testthat_tests", PACKAGE = <pkgName>)
as necessary within your unit test suite.
See Also
Catch, the library used to enable C++ unit testing.
Verify output
Description
This function is superseded in favour of expect_snapshot() and friends.
This is a regression test that records interwoven code and output into a
file, in a similar way to knitting an .Rmd file (but see caveats below).
verify_output() is designed particularly for testing print methods and error
messages, where the primary goal is to ensure that the output is helpful to
a human. Obviously, you can't test that with code, so the best you can do is
make the results explicit by saving them to a text file. This makes the output
easy to verify in code reviews, and ensures that you don't change the output
by accident.
verify_output() is designed to be used with git: to see what has changed
from the previous run, you'll need to use git diff or similar.
Usage
verify_output(
path,
code,
width = 80,
crayon = FALSE,
unicode = FALSE,
env = caller_env()
)
Arguments
path |
Path to record results. This should usually be a call to |
code |
Code to execute. This will usually be a multiline expression
contained within |
width |
Width of console output |
crayon |
Enable cli/crayon package colouring? |
unicode |
Enable cli package UTF-8 symbols? If you set this to
|
env |
The environment to evaluate |
Syntax
verify_output() can only capture the abstract syntax tree, losing all
whitespace and comments. To mildly offset this limitation:
Strings are converted to R comments in the output.
Strings starting with
# are converted to headers in the output.
CRAN
On CRAN, verify_output() will never fail, even if the output changes.
This avoids false positives because tests of print methods and error
messages are often fragile due to implicit dependencies on other packages,
and failure does not imply incorrect computation, just a change in
presentation.
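Since verify_output() is superseded, the same goal is now typically met with expect_snapshot(). A minimal sketch (the object being printed is just an illustration):

```r
library(testthat)

test_that("print method is readable", {
  # Record printed output to a snapshot file under
  # tests/testthat/_snaps/; subsequent runs compare against it,
  # and changes show up in git diffs for human review.
  expect_snapshot(print(data.frame(x = 1:3)))
})
```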
Watch a directory for changes (additions, deletions & modifications).
Description
This is used to power the auto_test() and
auto_test_package() functions which are used to rerun tests
whenever source code changes.
Usage
watch(path, callback, pattern = NULL, hash = TRUE)
Arguments
path |
character vector of paths to watch. Omit trailing backslash. |
callback |
function called every time a change occurs. It should
have three parameters: added, deleted, modified, and should return
|
pattern |
file pattern passed to |
hash |
hashes are more accurate at detecting changes, but are slower
for large files. When |
Details
Use Ctrl + Break (Windows), Esc (Mac GUI), or Ctrl + C (command line) to stop the watcher.
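A minimal callback sketch (the body is illustrative): the callback receives the added, deleted, and modified paths, and returns TRUE to keep watching:

```r
# Callback invoked on every detected change; returning FALSE would
# stop the watcher.
rerun_tests <- function(added, deleted, modified) {
  changed <- c(added, deleted, modified)
  message("Changed: ", paste(changed, collapse = ", "))
  TRUE
}

# Watch all .R files under an "R" directory (path is a placeholder):
# watch("R", rerun_tests, pattern = "\\.[Rr]$")
```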
Mock functions in a package.
Description
with_mock() and local_mock() are now defunct and should be replaced by
with_mocked_bindings() and local_mocked_bindings(). The old functions only
worked by abusing R's internals.
Usage
with_mock(..., .env = topenv())
local_mock(..., .env = topenv(), .local_envir = parent.frame())
Arguments
... |
named parameters redefine mocked functions; unnamed parameters will be evaluated after mocking the functions |
.env |
the environment in which to patch the functions, defaults to the top-level environment. A character is interpreted as package name. |
.local_envir |
Environment in which to add exit handler. For expert use only. |
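The recommended replacement looks like this; a sketch assuming a hypothetical internal helper is_online() in the package under test:

```r
library(testthat)

test_that("offline path is handled", {
  # Temporarily rebind is_online() (a hypothetical internal helper)
  # within this test only; the original binding is restored when the
  # test finishes.
  local_mocked_bindings(is_online = function() FALSE)
  expect_false(is_online())
})
```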