The col_count_match() validation function, the expect_col_count_match() expectation function, and the test_col_count_match() test function all check whether the column count in the target table matches that of a comparison table. The validation function can be used directly on a data table or with an agent object (technically, a ptblank_agent object) whereas the expectation and test functions can only be used with a data table. As a validation step or as an expectation, there is a single test unit that hinges on whether the column counts for the two tables are the same (after any preconditions have been applied).
col_count_match(
x,
count,
preconditions = NULL,
actions = NULL,
step_id = NULL,
label = NULL,
brief = NULL,
active = TRUE
)

expect_col_count_match(object, count, preconditions = NULL, threshold = 1)
test_col_count_match(object, count, preconditions = NULL, threshold = 1)
For the validation function, the return value is either a ptblank_agent object or a table object (depending on whether an agent object or a table was passed to x). The expectation function invisibly returns its input but, in the context of testing data, the function is called primarily for its potential side-effects (e.g., signaling failure). The test function returns a logical value.
A pointblank agent or a data table
obj:<ptblank_agent>|obj:<tbl_*>
// required
A data frame, tibble (tbl_df or tbl_dbi), Spark DataFrame (tbl_spark), or, an agent object of class ptblank_agent that is commonly created with create_agent().
The count comparison
scalar<numeric|integer>|obj:<tbl_*>
// required
Either a literal value for the number of columns, or, a table to compare against the target table in terms of column count values. If supplying a comparison table, it can either be a table object such as a data frame, a tibble, a tbl_dbi object, or a tbl_spark object. Alternatively, a table-prep formula (~ <tbl reading code>) or a function (function() <tbl reading code>) can be used to lazily read in the comparison table at interrogation time.
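To make these options concrete, here is a minimal sketch of each form (assuming an agent created beforehand and a comparison table tbl_2; the "comparison.rds" file is hypothetical):

# Literal column count
agent %>% col_count_match(count = 3)

# Column count taken from a comparison table object
agent %>% col_count_match(count = tbl_2)

# Comparison table read lazily at interrogation time
# (the "comparison.rds" file is a hypothetical example)
agent %>% col_count_match(count = ~ readRDS("comparison.rds"))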
Input table modification prior to validation
<table mutation expression>
// default: NULL (optional)
An optional expression for mutating the input table before proceeding with the validation. This can either be provided as a one-sided R formula using a leading ~ (e.g., ~ . %>% dplyr::mutate(col = col + 10)) or as a function (e.g., function(x) dplyr::mutate(x, col = col + 10)). See the Preconditions section for more information.
Thresholds and actions for different states
obj:<action_levels>
// default: NULL (optional)
A list containing threshold levels so that the validation step can react accordingly when exceeding the set levels for different states. This is to be created with the action_levels() helper function.
Manual setting of the step ID value
scalar<character>
// default: NULL (optional)
One or more optional identifiers for the single or multiple validation steps generated from calling a validation function. The use of step IDs serves to distinguish validation steps from each other and provide an opportunity for supplying a more meaningful label compared to the step index. By default this is NULL, and pointblank will automatically generate the step ID value (based on the step index) in this case. One or more values can be provided, and the exact number of ID values should (1) match the number of validation steps that the validation function call will produce (a single step in the case of col_count_match()), (2) be an ID string not used in any previous validation step, and (3) be a vector with unique values.
Optional label for the validation step
vector<character>
// default: NULL (optional)
Optional label for the validation step. This label appears in the agent report and, for the best appearance, it should be kept quite short. See the Labels section for more information.
Brief description for the validation step
scalar<character>
// default: NULL (optional)
A brief is a short, text-based description for the validation step. If nothing is provided here then an autobrief is generated by the agent, using the language provided in create_agent()'s lang argument (which defaults to "en" or English). The autobrief incorporates details of the validation step so it's often the preferred option in most cases (where a label might be better suited to succinctly describe the validation).
Is the validation step active?
scalar<logical>
// default: TRUE
A logical value indicating whether the validation step should be active. If the validation function is working with an agent, FALSE will make the validation step inactive (still reporting its presence and keeping indexes for the steps unchanged). If the validation function will be operating directly on data (no agent involvement), then any step with active = FALSE will simply pass the data through with no validation whatsoever. Aside from a logical vector, a one-sided R formula using a leading ~ can be used with . (serving as the input data table) to evaluate to a single logical value. With this approach, the pointblank function has_columns() can be used to determine whether to make a validation step active on the basis of one or more columns existing in the table (e.g., ~ . %>% has_columns(c(d, e))).
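As a sketch of that conditional approach (the column names d and e are hypothetical):

# Make the step active only if columns `d` and `e` exist in the table
# (hypothetical column names)
agent %>%
  col_count_match(
    count = 3,
    active = ~ . %>% has_columns(c(d, e))
  )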
A data table for expectations or tests
obj:<tbl_*>
// required
A data frame, tibble (tbl_df or tbl_dbi), or Spark DataFrame (tbl_spark) that serves as the target table for the expectation function or the test function.
The failure threshold
scalar<integer|numeric>(val>=0)
// default: 1
A simple failure threshold value for use with the expectation (expect_) and the test (test_) function variants. By default, this is set to 1 meaning that any single unit of failure in data validation results in an overall test failure. Whole numbers beyond 1 indicate that any failing units up to that absolute threshold value will result in a succeeding testthat test or evaluate to TRUE. Likewise, fractional values (between 0 and 1) act as a proportional failure threshold, where 0.15 means that 15 percent of failing test units results in an overall test failure.
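Since this validation generates a single test unit, the default threshold = 1 is the setting that matters here: any column count mismatch fails the test. A minimal sketch (assuming tbl is a three-column table):

# A count mismatch is a single failing test unit, so this yields FALSE
tbl %>% test_col_count_match(count = 4)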
The types of data tables that are officially supported are:
data frames (data.frame) and tibbles (tbl_df)
Spark DataFrames (tbl_spark)
the following database tables (tbl_dbi):
PostgreSQL tables (using the RPostgres::Postgres() as driver)
MySQL tables (with RMySQL::MySQL())
Microsoft SQL Server tables (via odbc)
BigQuery tables (using bigrquery::bigquery())
DuckDB tables (through duckdb::duckdb())
SQLite (with RSQLite::SQLite())
Other database tables may work to varying degrees but they haven't been formally tested (so be mindful of this when using unsupported backends with pointblank).
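As an illustration with one of these backends, here is a minimal sketch using an in-memory DuckDB table (assuming the DBI and duckdb packages are installed; the table name is arbitrary):

# Create an in-memory DuckDB table with three columns
con <- DBI::dbConnect(duckdb::duckdb())
DBI::dbWriteTable(con, "three_cols", data.frame(a = 1, b = 2, c = 3))
db_tbl <- dplyr::tbl(con, "three_cols")

# The validation works just as it does with a local data frame
db_tbl %>% test_col_count_match(count = 3)

DBI::dbDisconnect(con, shutdown = TRUE)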
Providing expressions as preconditions means pointblank will preprocess the target table during interrogation as a preparatory step. It might happen that this particular validation requires some operation on the target table before the column count comparison takes place. Using preconditions can be useful at times since we can develop a large validation plan with a single target table and make minor adjustments to it, as needed, along the way.
The table mutation is totally isolated in scope to the validation step(s) where preconditions is used. Using dplyr code is suggested here since the statements can be translated to SQL if necessary (i.e., if the target table resides in a database). The code is most easily supplied as a one-sided R formula (using a leading ~). In the formula representation, the . serves as the input data table to be transformed. Alternatively, a function could instead be supplied.
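Here is a brief sketch of both forms, reusing the a < 10 filtering from the YAML example below (a column named a is assumed to exist in the target table):

# Formula form: `.` stands in for the target table
# (the column `a` is an assumed example)
agent %>%
  col_count_match(
    count = 3,
    preconditions = ~ . %>% dplyr::filter(a < 10)
  )

# Function form: equivalent to the formula above
agent %>%
  col_count_match(
    count = 3,
    preconditions = function(x) dplyr::filter(x, a < 10)
  )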
Often, we will want to specify actions for the validation. This argument, present in every validation function, takes a specially-crafted list object that is best produced by the action_levels() function. Read that function's documentation for the lowdown on how to create reactions to above-threshold failure levels in validation. The basic gist is that you'll want at least a single threshold level (specified as either the fraction of test units failed, or, an absolute value), often using the warn_at argument. Using action_levels(warn_at = 1) or action_levels(stop_at = 1) are good choices depending on the situation (the first produces a warning, the other stop()s).
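Given that this validation has a single test unit, an absolute threshold of 1 is a natural choice; a minimal sketch:

# Warn (rather than stop) when the column counts don't match
agent %>%
  col_count_match(
    count = tbl_2,
    actions = action_levels(warn_at = 1)
  )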
label may be a single string or a character vector that matches the number of expanded steps. label also supports {glue} syntax and exposes the following dynamic variables contextualized to the current step:

"{.step}": The validation step name

The glue context also supports ordinary expressions for further flexibility (e.g., "{toupper(.step)}") as long as they return a length-1 string.
Want to describe this validation step in some detail? Keep in mind that this is only useful if x is an agent. If that's the case, brief the agent with some text that fits. Don't worry if you don't want to do it. The autobrief protocol kicks in when brief = NULL and a simple brief will then be automatically generated.
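For instance, a custom brief might look like this (the wording is just an example):

# Supply a custom brief instead of the autobrief
agent %>%
  col_count_match(
    count = tbl_2,
    brief = "Column count should match that of `tbl_2`."
  )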
A pointblank agent can be written to YAML with yaml_write() and the resulting YAML can be used to regenerate an agent (with yaml_read_agent()) or interrogate the target table (via yaml_agent_interrogate()). When col_count_match() is represented in YAML (under the top-level steps key as a list member), the syntax closely follows the signature of the validation function. Here is an example of how a complex call of col_count_match() as a validation step is expressed in R code and in the corresponding YAML representation.
R statement:
agent %>%
col_count_match(
count = ~ file_tbl(
file = from_github(
file = "sj_all_revenue_large.rds",
repo = "rich-iannone/intendo",
subdir = "data-large"
)
),
preconditions = ~ . %>% dplyr::filter(a < 10),
actions = action_levels(warn_at = 0.1, stop_at = 0.2),
label = "The `col_count_match()` step.",
active = FALSE
)
YAML representation:
steps:
- col_count_match:
count: ~ file_tbl(
file = from_github(
file = "sj_all_revenue_large.rds",
repo = "rich-iannone/intendo",
subdir = "data-large"
)
)
preconditions: ~. %>% dplyr::filter(a < 10)
actions:
warn_fraction: 0.1
stop_fraction: 0.2
label: The `col_count_match()` step.
active: false
In practice, both of these will often be shorter. Arguments with default values won't be written to YAML when using yaml_write() (though it is acceptable to include them with their default when generating the YAML by other means). It is also possible to preview the transformation of an agent to YAML without any writing to disk by using the yaml_agent_string() function.
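For example, such a preview can be produced like this:

# Print the agent's YAML representation to the console
# without writing a file
yaml_agent_string(agent = agent)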
Create a simple table with three columns and three rows of values:
tbl <-
dplyr::tibble(
a = c(5, 7, 6),
b = c(7, 1, 0),
c = c(1, 1, 1)
)
tbl
#> # A tibble: 3 x 3
#> a b c
#> <dbl> <dbl> <dbl>
#> 1 5 7 1
#> 2 7 1 1
#> 3 6 0 1
Create a second table which is quite different but has the same number of columns as tbl.
tbl_2 <-
dplyr::tibble(
e = c("a", NA, "a", "c"),
f = c(2.6, 1.2, 0, NA),
g = c("f", "g", "h", "i")
)
tbl_2
#> # A tibble: 4 x 3
#> e f g
#> <chr> <dbl> <chr>
#> 1 a 2.6 f
#> 2 <NA> 1.2 g
#> 3 a 0 h
#> 4 c NA i
We'll use these tables with the different function variants.

Using an agent with validation functions and then interrogate()

Validate that the count of columns in the target table (tbl) matches that of the comparison table (tbl_2).
agent <-
create_agent(tbl = tbl) %>%
col_count_match(count = tbl_2) %>%
interrogate()
Printing the agent in the console shows the validation report in the Viewer. Here is an excerpt of the validation report, showing the single entry that corresponds to the validation step demonstrated here.

agent
Using the validation function directly on the data (no agent)

This way of using validation functions acts as a data filter: data is passed through but should stop() if there is a single test unit failing. The behavior of side effects can be customized with the actions option.
tbl %>% col_count_match(count = tbl_2)
#> # A tibble: 3 x 3
#> a b c
#> <dbl> <dbl> <dbl>
#> 1 5 7 1
#> 2 7 1 1
#> 3 6 0 1
Using the expectation function

With the expect_*() form, we would typically perform one validation at a time. This is primarily used in testthat tests.
expect_col_count_match(tbl, count = tbl_2)
Using the test function

With the test_*() form, we should get a single logical value returned to us.
tbl %>% test_col_count_match(count = 3)
#> [1] TRUE
Function ID: 2-32
Other validation functions: col_exists(), col_is_character(), col_is_date(), col_is_factor(), col_is_integer(), col_is_logical(), col_is_numeric(), col_is_posix(), col_schema_match(), col_vals_between(), col_vals_decreasing(), col_vals_equal(), col_vals_expr(), col_vals_gt(), col_vals_gte(), col_vals_in_set(), col_vals_increasing(), col_vals_lt(), col_vals_lte(), col_vals_make_set(), col_vals_make_subset(), col_vals_not_between(), col_vals_not_equal(), col_vals_not_in_set(), col_vals_not_null(), col_vals_null(), col_vals_regex(), col_vals_within_spec(), conjointly(), row_count_match(), rows_complete(), rows_distinct(), serially(), specially(), tbl_match()