
testthat (version 3.0.3)

expect_snapshot: Snapshot testing

Description

[Experimental]

Snapshot tests (aka golden tests) are similar to unit tests except that the expected result is stored in a separate file that is managed by testthat. Snapshot tests are useful when the expected value is large, or when the intent of the code can only be verified by a human (e.g. "this is a useful error message"). Learn more in vignette("snapshotting").

  • expect_snapshot() captures all messages, warnings, errors, and output from code.

  • expect_snapshot_output() captures just output printed to the console.

  • expect_snapshot_error() captures just error messages.

  • expect_snapshot_value() captures the return value.

(These functions supersede verify_output(), expect_known_output(), expect_known_value(), and expect_known_hash().)
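As a minimal sketch (the function f() and its message are invented for illustration), a snapshot expectation is written inside an ordinary test_that() block:

  # e.g. in tests/testthat/test-foo.R
  f <- function() {
    message("Loading 2 rows")
    print(head(mtcars, 2))
  }

  test_that("f() reports progress and prints a preview", {
    # records both the message and the printed output in the snapshot file
    expect_snapshot(f())
  })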

Usage

expect_snapshot(x, cran = FALSE, error = FALSE)

expect_snapshot_output(x, cran = FALSE)

expect_snapshot_error(x, class = "error", cran = FALSE)

expect_snapshot_value(x, style = c("json", "json2", "deparse", "serialize"), cran = FALSE, ...)

Arguments

x

Code to evaluate.

cran

Should these expectations be verified on CRAN? By default they are not, because snapshot tests tend to be fragile: they often rely on minor details of dependencies.

error

Do you expect the code to throw an error? The expectation will fail (even on CRAN) if an unexpected error is thrown or the expected error is not thrown. (See the sketch after these argument descriptions for an example.)

class

Expected class of condition, e.g. use "error" for errors, "warning" for warnings, "message" for messages. The expectation will always fail (even on CRAN) if a condition of this class isn't seen when executing x.

style

Serialization style to use: one of "json", "json2", "deparse", or "serialize".

...

For expect_snapshot_value() only, passed on to waldo::compare() so you can control the details of the comparison.
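A hedged sketch of how these arguments might be used; the error, warning, and value being snapshotted are invented for illustration:

  # error = TRUE: the snapshot records the error message itself
  test_that("invalid input gives a helpful error", {
    expect_snapshot(stop("`x` must be a positive number"), error = TRUE)
  })

  # class = "warning": capture a warning message rather than an error
  test_that("deprecated argument warns", {
    expect_snapshot_error(warning("`old_arg` is deprecated"), class = "warning")
  })

  # style and ...: snapshot a return value; extra arguments (e.g. tolerance)
  # are passed on to waldo::compare()
  test_that("default settings are stable", {
    expect_snapshot_value(list(alpha = 0.05, iter = 100L), style = "json", tolerance = 1e-6)
  })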

Workflow

The first time that you run a snapshot expectation it will run x, capture the results, and record them in tests/testthat/_snaps/{test}.md. Each test file gets its own snapshot file, e.g. test-foo.R will get _snaps/foo.md.

It's important to review the snapshot files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have been updated in the expected way.

On subsequent runs, the result of x will be compared to the value stored on disk. If it's different, the expectation will fail, and a new file _snaps/{test}.new.md will be created. If the change was deliberate, you can approve the change with snapshot_accept() and then the tests will pass the next time you run them.
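For instance, after a deliberate change to the output of the tests in test-foo.R (a hypothetical file name), the new snapshot could be accepted like this:

  # accept the updated snapshots for the "foo" test file
  testthat::snapshot_accept("foo")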

Note that snapshotting can only work when executing a complete test file (with test_file(), test_dir(), or friends) because there's otherwise no way to figure out the snapshot path. If you run snapshot tests interactively, they'll just display the current value.
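For example, a single test file or the whole suite can be run from the package root (the file name here is illustrative):

  # run one test file, or every test under tests/testthat/
  testthat::test_file("tests/testthat/test-foo.R")
  testthat::test_dir("tests/testthat")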